I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that its data is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect glTexImage2D to byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU: you can do it very efficiently on the GPU instead:
1. Load the ARGB data into a temporary RGBA texture.
2. Draw a full-screen quad with this texture, rendering into the target texture, using a simple pixel shader.
3. Continue loading other resources; there is no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
    gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
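For context, a minimal sketch of the render-to-texture pass described in the steps above, assuming the fragment shader has already been compiled and linked; the names fbo, swizzle_program, temp_tex, target_tex, width, height and the helper drawFullScreenQuad() are illustrative, not part of any particular API:

// Render the temporary ARGB-ordered texture into target_tex (a GL_TEXTURE_2D
// texture of the same size), letting the shader's .gbar swizzle reorder the
// channels on the GPU.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, target_tex, 0);
glViewport(0, 0, width, height);

glUseProgram(swizzle_program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, temp_tex);
glUniform1i(glGetUniformLocation(swizzle_program, "unit_in"), 0);

drawFullScreenQuad();  // any quad covering the whole viewport

glBindFramebuffer(GL_FRAMEBUFFER, 0);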
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
uniform vec2 image_size;  // texture dimensions, needed to normalize gl_FragCoord for a sampler2D

void main()
{
    gl_FragColor = texture2D(image, gl_FragCoord.xy / image_size).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and that the A values are all one pixel off. But like the asker, I get this while loading a JPEG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's 'safe'.
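In code, the trick amounts to nothing more than offsetting the pointer passed to glTexImage2D (a sketch; width and height are illustrative, and bitmapData is the unsigned char pointer from the image rep):

// Skipping the leading A byte makes OpenGL read R, G, B, (next pixel's A)
// wherever it expects R, G, B, A.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bitmapData + 1);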
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;

// An array of tokens to set ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

// Bind the ARGB texture
glBindTexture(GL_TEXTURE_2D, argb_texture);

// Set all four swizzle parameters in one call to glTexParameteriv
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure the argb_swizzle entries are in the right order. Please correct me if this is not right; I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component
order of texture data on the fly as it is read by the graphics
hardware.
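To make the ordering concrete: each entry of the array selects which stored channel feeds the corresponding output channel, in R, G, B, A order. Assuming the ARGB bytes were uploaded unchanged with format GL_RGBA and type GL_UNSIGNED_BYTE, a commented version of the same array:

// After the straight upload, the texture stores: R = source A, G = source R,
// B = source G, A = source B. The swizzle undoes that:
static const GLenum argb_swizzle[] =
{
    GL_GREEN,  // output R reads stored G (the source R byte)
    GL_BLUE,   // output G reads stored B (the source G byte)
    GL_ALPHA,  // output B reads stored A (the source B byte)
    GL_RED     // output A reads stored R (the source A byte)
};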
Related
I have to create an animation where a Gatling gun shoots (it doesn't have to be complex, because it's just practice). I drew a basic version of my gun, which looks like this:
Don't mind the colors - I made them like that to be able to see where the edges of the particular parts of the gun are. Now I would like to make it look better by using some texture - camo or something metallic - example 1 or example2. I know how to load a texture and how to use it for 2D objects, but I have no idea whether it is possible to use one texture for my whole drawing, or whether I have to apply a texture to every part separately. This is my code that loads a texture from a BMP file and makes it usable:
void initTexture(string fileName)
{
    loadBmp(fileName.c_str());
    textureId = 13;
    glBindTexture(GL_TEXTURE_2D, textureId); //Tell OpenGL which texture to edit
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    //Map the image to the texture
    glTexImage2D(GL_TEXTURE_2D,    //Always GL_TEXTURE_2D
                 0,                //0 for now
                 GL_RGB,           //Format OpenGL uses for image
                 tex.info.biWidth, tex.info.biHeight, //Width and height
                 0,                //The border of the image
                 GL_RGB,           //GL_RGB, because pixels are stored in RGB format
                 GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                                   //as unsigned numbers
                 tex.px);          //The actual pixel data
}
loadBmp() is a function that loads a bitmap file. I tried searching the Internet and Stack Overflow, but all the examples were about cubes or spheres, which doesn't help me. How can I put a texture on this drawing?
Texture mapping requires you to (manually) assign texture coordinates to each vertex. There are some approaches for automatic texture coordinate generation, or for getting away without texture coordinates by giving each face an individual pixel (Disney Animation pioneered the latter method for their computer-animated films).
Since you didn't specify which program you used for modelling, I'll refer you to a tutorial on texture UV mapping for Blender:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV/Unwrapping
Please don't tell me you "coded" your gun, because this is wrong, wrong, wrong!
Once you have the texture coordinates, you pass them to OpenGL as just another vertex attribute.
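For example, with the fixed-function pipeline the question's code is using, each vertex simply gets a UV pair right before it (a sketch; the coordinates and textureId are illustrative):

// Immediate-mode sketch: glTexCoord2f sets the texture coordinate that the
// following glVertex* call will use.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);

glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();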
I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer running an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE
and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color
attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I readPixels from the depth texture.
Now, creating a temporary texture every time through this code (EDIT: or even creating it once and keeping GPU memory tied up by it) is annoying and potentially slow, so I was wondering if anyone knew of an alternative way to read a single pixel from a depth texture. (Of course, if there is no better way, I will keep around one texture to resize as needed and use only that for the temporary color attachment, but this seems rather roundabout.)
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
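Putting it together, a minimal sketch of the read-back with no color attachment at all (fbo, depth_tex, x and y are assumed to exist already):

GLfloat depth = 0.0f;

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth_tex, 0);
glReadBuffer(GL_NONE);  // no color attachment, so read from no color buffer

if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);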
Let's say I have a 32 bpp pixel array, but I am using only the blue channel/component of the pixels. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0,x,x,x) in the texture.
I am using OpenGL 1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced with the format GL_RED (internal format GL_R8), which is a single-component texture. In a shader you can use a swizzle like
gl_FragColor.rgb = texture(tex, texcoord).rrr;
But there's also the option to set what you might call a "static" swizzle in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);
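Since the question mentions OpenGL 1.5, here is a sketch of the older GL_LUMINANCE route: copy the blue bytes out of the 32 bpp array into a tightly packed buffer and upload that (pixels, width and height are illustrative; the byte offset assumes B,G,R,A memory order, so adjust it if your layout differs):

// One byte per pixel, taken from the blue channel of the 32 bpp source.
unsigned char *gray = new unsigned char[width * height];
for (int i = 0; i < width * height; ++i)
    gray[i] = pixels[i * 4 + 0];  // index 0 = blue in B,G,R,A order

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows are tightly packed now
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, gray);
delete[] gray;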
I have created a texture using
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, CONSENSUS_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
This texture is used in other code and filled with depth values. Now I want to copy the depth values into an RGBA texture (it doesn't matter which color channel).
How can I do this?
If it needs to be fast, I'd say render an orthographic quad the size of the texture and use a shader that reads from the depth texture and writes to the target texture.
If performance doesn't matter that much, you can use PBOs (they might even be faster depending on your render pipeline, but they stall the CPU). Here's an overview of said PBOs.
I don't know of any built-in OpenGL method to do this.
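A sketch of the shader route under the following assumptions: the RGBA texture is attached to an FBO named fbo as GL_COLOR_ATTACHMENT0, the viewport matches the texture size, program is built from the fragment shader below, depth_texture is the question's rectangle texture, and drawFullScreenQuad() stands in for whatever quad-drawing code you already have. Make sure GL_TEXTURE_COMPARE_MODE on the depth texture is GL_NONE so it samples as plain values:

// Fragment shader: read the depth value at this fragment and replicate it
// into the color channels of the RGBA render target.
const char *copy_depth_fs =
    "#version 120\n"
    "#extension GL_ARB_texture_rectangle : enable\n"
    "uniform sampler2DRect depth_tex;\n"
    "void main() {\n"
    "    float d = texture2DRect(depth_tex, gl_FragCoord.xy).r;\n"
    "    gl_FragColor = vec4(d, d, d, 1.0);\n"
    "}\n";

// Per-copy pass:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, depth_texture);
glUniform1i(glGetUniformLocation(program, "depth_tex"), 0);
drawFullScreenQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);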
I have a texture that has only one channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously, because channel 1 is red; RGB).
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the .cpp file, you can write:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RED,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture becomes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format while passing in integer values rendered a perfect image.
Because I was truncating the DICOM pixel values above 255 anyway, it seemed logical to copy the values into a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern (3.0+) OpenGL versions GL_LUMINANCE is deprecated, so the new way is to use the GL_RED format. That alone results in a red texture, so to get around it you should create a custom shader, as the answers above have shown. In that shader you grab the red component of the texture, as it's the only one with data, and set the green and blue channels to the red channel's value. That converts it to grayscale, because a grayscale texture has all three RGB channels the same and the alpha/transparency channel set to 1.
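If a custom shader is inconvenient, the same replication can be set up once at load time with the texture swizzle parameters mentioned in the other answers (this needs OpenGL 3.3 or the ARB_texture_swizzle extension); a sketch, with tex as an illustrative texture name:

// Make every sample of the single-channel GL_RED texture return (r, r, r, 1).
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_ONE);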