I have a luminance-alpha texture where each pixel has two bytes: the first byte is luminance and the second byte is alpha.
Is it possible to upload this to the GPU such that (within the fragment shader) the alpha component always refers to alpha, regardless of the texture used, without wasting texture memory on green and blue channels?
// fragment shader
vec4 px = texture(tex, uv);
px.r; // luminance
px.a; // alpha (note that .a is used, not .g, which would be green for 32-bit RGBA textures)
The reason I want to avoid using GL_RG as the internal format is that it would require a separate code path in the shader for 32-bit RGBA textures, and I'd like to avoid setting uniforms / conditionals for different texture types.
(If common GPUs can eliminate the memory overhead of the G and B channels when they're empty/redundant in RGBA internal formats, that would be okay too.)
The reason I want to avoid using GL_RG as the internal format is that it would require a separate code path in the shader for 32-bit RGBA textures
The internal format defines the components stored in the texture. What you fetch, however, can be adjusted with a swizzle mask on the texture. To emulate luminance/alpha with RG, you'd do this:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_GREEN};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
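A minimal upload sketch to go with that swizzle, assuming a tightly packed luminance/alpha buffer (width, height and pixels are placeholders for your own data; the luminance byte lands in R, the alpha byte in G):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                    // rows of 2-byte pixels may not be 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, width, height, 0,  // only two channels are stored
             GL_RG, GL_UNSIGNED_BYTE, pixels);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);  // swizzleMask as above
After this, px.r is luminance and px.a is alpha in any shader that samples the texture, with no shader-side special casing.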
I'd like to access the framebuffer to get the RGB values and change them for each pixel. This is because glReadPixels and glDrawPixels are too slow to use, so I should use shaders instead.
I have now written the code and can successfully display a three-dimensional model using GLSL shaders.
I drew two cubes as follows.
....
glDrawArrays(GL_TRIANGLES, 0, 12*6);
....
and the fragment shader:
varying vec3 fragmentColor;
void main()
{
gl_FragColor = vec4(fragmentColor, 1);
}
How can I access the RGB values and change them?
For example, if the pixel values at (u1, v1) and (u2, v2) on the window are (0, 0, 255), I want to change them to (255, 0, 0).
With the exception of an OpenGL ES-only extension, fragment shaders cannot just read from the current framebuffer. Otherwise, we wouldn't need blending.
You also can't just render to the image you're reading from in a shader. So if you need to do some sort of post-processing, then that is best done by rendering to a separate image. That is, you do your rendering to image 1, then bind that as a texture and change the FBO so that you're rendering to image 2.
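Roughly, that ping-pong looks like this (fbo, color_tex[0]/color_tex[1], draw_scene and draw_fullscreen_quad are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex[0], 0);
draw_scene();                                // pass 1: render the scene into image 1
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex[1], 0);
glBindTexture(GL_TEXTURE_2D, color_tex[0]);  // read image 1 as a texture...
draw_fullscreen_quad();                      // ...while pass 2 writes the modified pixels into image 2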
Alternatively, if you have access to OpenGL 4.5/ARB/NV_texture_barrier, then you can use texture barriers to handle this. This permits you a single read/modify/write pass, if you bind the current framebuffer's image as a texture. You'd issue the barrier before doing your read/modify/write, then bind that texture to a sampler while still rendering to that framebuffer.
Also, this requires that the FS read from the exact texel that it would write to. Assuming a viewport anchored at 0,0, the code for this would be texelFetch(sampler, ivec2(gl_FragCoord.xy), 0). You can't read from someone else's texel and modify it.
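Host-side, a single read/modify/write pass might look roughly like this (fbo, scene_tex and draw_fullscreen_quad are placeholders; the fragment shader does the texelFetch shown above):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);    // scene_tex is this FBO's color attachment
glBindTexture(GL_TEXTURE_2D, scene_tex);   // ...and is also bound for sampling
glTextureBarrier();                        // make previous writes to scene_tex visible to fetches
draw_fullscreen_quad();                    // each fragment reads, modifies and writes its own texel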
Obviously you must be rendering to a texture; you cannot use the default framebuffer for this.
Texture barrier could be used for cases where you read from different texels than you write to. But that would require doing something similar to the first case of switching bound images. Though you wouldn't need to change the FBO exactly; you could change the region of the FBO that you render to. That is, so long as you're reading from a different area than you're rendering to, and you use barriers appropriately when switching between those regions, everything is fine.
Is it possible to use glBlitFramebuffer to copy the alpha component from a read framebuffer with an RGBA color attachment to the red component of a draw framebuffer with an R8 color attachment? If not, how would you do this?
Apparently the swizzle mask isn't used by glBlitFramebuffer.
The only way to do this (without pulling the memory from the GPU back to the CPU) is with some form of rendering operation. A blit can't do it.
You can use a compute shader, if that's available. Just bind the source and destination images via Image Load Store and read/write to them based on the compute shader's invocation index.
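A rough host-side sketch of that, assuming copy_prog is an already-linked compute program with an 8x8 local size whose shader does imageStore(dst, p, vec4(imageLoad(src, p).a)):
glBindImageTexture(0, rgba_tex, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA8);  // source image
glBindImageTexture(1, r8_tex,   0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R8);     // destination image
glUseProgram(copy_prog);
glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);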
However, it's not clear exactly what you hope to gain by doing this copy operation. If you only plan to read from the GL_R8 texture as a bound texture/image, then all you need to do is create a view texture of the RGBA image. The reason to create a view of it is to be able to give the view a different swizzle mask. Simply swizzle the alpha into the red, and set green/blue/alpha to 0,0,1.
Obviously it won't be as efficient to access as a true GL_R8 texture. But you won't have to execute a potentially expensive copy operation either.
Using GL 4.5 Direct State Access calls:
GLuint alpha_tex;
glGenTextures(1, &alpha_tex); // a view needs a generated but not-yet-bound texture name
glTextureView(alpha_tex, GL_TEXTURE_2D, rgba_tex, GL_RGBA8, 0, num_mipmaps(rgba_tex), 0, 1);
GLint swizzleMask[] = {GL_ALPHA, GL_ZERO, GL_ZERO, GL_ONE};
glTextureParameteriv(alpha_tex, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Let's say I have a 32bpp pixel array, but I am using only the blue channel/component from the pixels. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0, x, x, x) in the texture.
I am using OpenGL 1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced by the single-component internal format GL_RED (e.g. GL_R8). In a shader you can use a swizzle like
gl_FragColor.rgb = texture2D(tex, uv).rrr;
But there's also the option to set a "static" swizzle, so to speak, in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);
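Since the question targets OpenGL 1.5 (no swizzle parameters, possibly no shaders), the most compatible route is probably to extract the blue bytes on the CPU and upload them as GL_LUMINANCE, which the fixed pipeline expands to (x, x, x, 1). A rough sketch, where src, width and height are placeholders and the blue byte's offset depends on your actual pixel layout:
unsigned char *lum = malloc(width * height);
for (int i = 0; i < width * height; ++i)
    lum[i] = src[i * 4 + 0];            // assuming BGRA byte order, blue is byte 0 of each pixel
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // single-byte rows aren't necessarily 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, lum);
free(lum);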
I am loading bitmaps with OpenGL to texture a 3d mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to
obtain the values of transparency for each pixel
and
render them with the transparency applied
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R,G,B,A) gets 8 bits. When you upload your texture, specify a 32bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, eg: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
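Putting the upload and the blend state together, a minimal setup might look like this (width, height and rgba_pixels are placeholders):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,  // 32-bit RGBA texture
             GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
glEnable(GL_BLEND);                                  // mix texture RGB with the background
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // weighted by the texture's alpha
// ... draw the textured mesh here ...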
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when texture coordinates are interpolated, the sampled colours won't be blended towards the original transparency colour from your BMP.
If your pixmap is 8-bit single channel, it is either grayscale or uses a palette. What you first need to do is convert the pixmap data into RGBA format. For this you allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (lookup table) and put that color value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D.
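A rough CPU-side sketch of that conversion (indices, palette, width and height are placeholders; palette is assumed to hold 256 RGBA entries, with alpha already set to 0 for the transparent index):
unsigned char *rgba = malloc(width * height * 4);
for (int i = 0; i < width * height; ++i) {
    const unsigned char *entry = &palette[indices[i] * 4];  // look the pixel's index up in the palette
    rgba[i * 4 + 0] = entry[0];  // R
    rgba[i * 4 + 1] = entry[1];  // G
    rgba[i * 4 + 2] = entry[2];  // B
    rgba[i * 4 + 3] = entry[3];  // A
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba);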
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader instead: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;
uniform sampler1D palette_lut;
void main()
{
float palette_index = texture2D(texture,gl_TexCoord[0].st).r;
vec4 color = texture1D(palette_lut, palette_index);
gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this affects objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh, re-sorting each and every frame. A method to avoid this is breaking meshes down into convex submeshes (of course a mesh that's already convex cannot be broken down further). Then use the following method:
Enable face culling
for convex_submesh in sorted(meshes, far to near):
set face culling to front faces (i.e. the backside gets rendered)
render convex_submesh
set face culling to back faces (i.e. the front side gets rendered)
render convex_submesh again
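In OpenGL calls, that loop might look roughly like this (sorted_submeshes, submesh_count and draw_submesh are placeholders):
glEnable(GL_CULL_FACE);
for (int i = 0; i < submesh_count; ++i) {  // submeshes sorted far to near
    glCullFace(GL_FRONT);                  // cull front faces: the back side gets rendered
    draw_submesh(sorted_submeshes[i]);
    glCullFace(GL_BACK);                   // cull back faces: the front side gets rendered
    draw_submesh(sorted_submeshes[i]);
}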
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests I pass a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that its data is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My reading of the docs makes me expect glTexImage2D to byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU: you can do it very efficiently on the GPU instead.
Load the ARGB data into a temporary RGBA texture.
Draw a full-screen quad with this texture while rendering into the target texture, using a simple pixel shader.
Continue loading other resources; there is no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in real time. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
gl_FragColor = texture2D(image, gl_TexCoord[0].st).gbar;
}
If you don't know about shaders, read this tut here: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel out. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
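A minimal sketch of that trick (data is assumed to come from the image rep's bitmapData; width and height are placeholders):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data + 1);  // skip the leading A byte so R, G, B line up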
The name of a texture whose data is in ARGB format.
GLuint argb_texture;
An array of tokens to set ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};
Bind the ARGB texture
glBindTexture(GL_TEXTURE_2D, argb_texture);
Set all four swizzle parameters in one call to glTexParameteriv
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure whether argb_swizzle is in the right order. Please correct me if it is not. I am not clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.