reading BMP BGR values, passing to glTexImage2D - c++

I've written a routine that reads all the pixel values from a BMP file into an array, which I want to feed to glTexImage2D in OpenGL to turn into a texture. Doing this I realised that the actual pixel format inside a BMP file is BGR, not RGB, so my array contains blue, green, red. As a result my final texture has its blue and red channels swapped compared to the original bitmap.
This is how I call it:
glTexImage2D(GL_TEXTURE_2D, 0, 3, imgdata->width, imgdata->height, 0, GL_RGB, GL_UNSIGNED_BYTE, imgdata->pixdata);
Is there a workaround for this, or a wrong argument I'm passing? The only solution I can find right now is to manually swap the R and B values inside my array using a loop.

Why not change the format argument of glTexImage2D to GL_BGR?

I set GL_BGR and it is marked as an undeclared identifier.
Then you're not using a proper library for getting at OpenGL. The system-provided OpenGL headers may or may not contain up-to-date functions and enumerators, so you need an OpenGL loading library (GLEW, GLAD, and the like) to get at them. Include that library's header instead of GL/gl.h.
Once you're accessing OpenGL properly, the rest is simple. Use GL_BGR as your pixel transfer format.
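For example, once a loader header that declares GL_BGR is included, the call from the question only needs its format argument changed (the legacy internal format 3 could equally be spelled GL_RGB):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imgdata->width, imgdata->height,
             0, GL_BGR, GL_UNSIGNED_BYTE, imgdata->pixdata); // data stays in BGR order, no manual swap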

Related

Difference between glBitmap and glTexImage2D

I need to display an image in an OpenGL window.
The image changes every timer tick.
I've checked on Google how to do this, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color will be written to the framebuffer. If the bit is 0, then the pixel in the framebuffer will not be altered.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
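For reference, a rough sketch of the glTexImage2D route for an image that changes every tick (imgWidth, imgHeight, newPixels and tex are placeholder names, not from the question):
// once, at startup: allocate the texture storage
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imgWidth, imgHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// every timer tick: re-upload the new pixels, then draw a textured quad
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgWidth, imgHeight, GL_RGB, GL_UNSIGNED_BYTE, newPixels);
// ...draw two textured triangles (or a quad) covering the viewport...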

"cast" GL_R8 to GL_BGRA

I'm doing some GPGPU programming with OpenGL.
I want to be able to write all my data to one-dimensional textures with the format GL_R8, so that I can basically treat it like an std::array object.
Then during rendering I would like to be able to set how the GPU should read the image, e.g. "cast" it to 1024x1024 BGRA.
Is this possible?
e.g. what I want to be able to do:
gpu::array<uint8_t> data(GL_R8, width*height*4);
gpu::bind(data, GL_TEXTURE0, gpu::format::bgra, width, height);
Then use a buffer texture. There's no rule (that I know of) that says you can't hook the same buffer up to multiple different textures. That would allow one texture to use it with the GL_R8 internal format. And another texture could use it with the GL_RGBA8 format.
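A sketch of what that could look like (the buffer/texture names and sizes here are assumptions, not code from the question):
GLuint buf, tex_r8, tex_rgba8;
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, width * height * 4, NULL, GL_DYNAMIC_DRAW);
glGenTextures(1, &tex_r8);
glBindTexture(GL_TEXTURE_BUFFER, tex_r8);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, buf);      // one view of the buffer: single bytes
glGenTextures(1, &tex_rgba8);
glBindTexture(GL_TEXTURE_BUFFER, tex_rgba8);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, buf);   // another view of the same buffer: 4-byte texels
Note that buffer textures are one-dimensional and are read in a shader with texelFetch on a samplerBuffer, so "casting" to 1024x1024 means computing the index as y * 1024 + x yourself.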

How to load OpenGL texture from ARGB NSImage without swizzling?

I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests I specify a format parameter of GL_BGRA to glTexImage2D, and a type of GL_UNSIGNED_INT_8_8_8_8_REV.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect glTexImage2D to byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify GL_BGRA as the "internalformat". What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very efficiently on the GPU instead:
Load the ARGB data into a temporary RGBA texture.
Draw a full-screen quad with this texture while rendering into the target texture, using a simple pixel shader.
Continue to load other resources; no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
    gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
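A rough sketch of the C++ side of that render-to-texture step (fbo, temp_tex, target_tex and swizzle_program are assumed names, and the quad-drawing code is omitted):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target_tex, 0);
glViewport(0, 0, width, height);
glUseProgram(swizzle_program);                  // program built from the shader above
glBindTexture(GL_TEXTURE_RECTANGLE, temp_tex);  // temporary texture holding the raw ARGB bytes
// ...draw a full-screen quad here...
glBindFramebuffer(GL_FRAMEBUFFER, 0);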
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
    // sample with the quad's texture coordinates (gl_FragCoord is in pixels, which a
    // regular sampler2D does not expect) and reorder ARGB-read-as-RGBA with .gbar
    gl_FragColor = texture2D(image, gl_TexCoord[0].st).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel takes its A value from one byte beyond the end of the image, and that the A values are all off by one pixel. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
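In code, the trick amounts to something like this (width, height and bitmapData are placeholders; note the caveats above about alpha and the final out-of-bounds byte):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bitmapData + 1); // skip the leading A so RGB lands in the right slots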
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;
// An array of tokens to set the ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};
// Bind the ARGB texture.
glBindTexture(GL_TEXTURE_2D, argb_texture);
// Set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure if argb_swizzle is in the right order. Please correct me if this is not right. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component
order of texture data on the fly as it is read by the graphics
hardware.

Using libpng to "split" an image into segments

I'm trying to use libpng to split an image into different chunks. The reason is that I can't load a texture larger than 512x512 on the hardware I'm currently working with. I accomplished this before with a mixture of SDL and SDL_image. I basically used the srcrect argument of SDL_BlitSurface to copy just a portion of the image, which I then converted into an OpenGL texture. Combine that with a simple loop horizontally and then vertically, and I was able to get an array of textures, each at most 512x512. Then it was just a matter of rendering them at the correct positions.
Right now I don't have the luxury of using SDL, so I figured it should be possible to just do this directly myself via libpng. Based on some googling I think it's just a matter of using png_read_rows to read only the parts I need. But that's where I'm stuck; I'm not exactly sure how to do that.
Also, if you wonder why I don't just split the images in gimp/photoshop/paint or whatever, it's because I don't control them and am downloading them at runtime.
Thanks for the help in advance.
You don't have to mess with extracting the tiles. You can tell OpenGL to use only some portion of the data you give it to initialize the texture. The key is the glPixelStorei(GL_UNPACK_...) parameters. Say your input image has dimensions img.width and img.height, there are 4 bytes to an RGB pixel (i.e. one byte of padding per pixel), and your sub-image is defined by subimg.off_x, subimg.off_y, subimg.width, subimg.height. Then you can load it like this:
glPixelStorei(GL_UNPACK_ROW_LENGTH, img.width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, subimg.off_x);
glPixelStorei(GL_UNPACK_SKIP_ROWS, subimg.off_y);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             subimg.width, subimg.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixeldata);
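Building on that, a sketch of how a whole decoded image could be cut into tiles of at most 512x512 with those parameters (img and pixeldata are the same assumed names as above; needs <vector> and <algorithm>):
const int MAX_TILE = 512;
std::vector<GLuint> tiles;
for (int y = 0; y < img.height; y += MAX_TILE) {
    for (int x = 0; x < img.width; x += MAX_TILE) {
        int w = std::min(MAX_TILE, img.width - x);   // edge tiles may be smaller
        int h = std::min(MAX_TILE, img.height - y);
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, img.width);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, x);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, y);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixeldata);
        tiles.push_back(tex);
    }
}
// reset the unpack state so later uploads behave normally
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);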

Can I use a grayscale image with the OpenGL glTexImage2D function?

I have a texture which has only 1 channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously, because channel 1 is red; RGB).
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA,
dicomImage->GetColumns(), dicomImage->GetRows(),
0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the .cpp file, you can write:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RED,
dicomImage->GetColumns(), dicomImage->GetRows(),
0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
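One extra caveat that is easy to hit with single-channel data (not mentioned in the answer above, but a common pitfall): the default unpack alignment is 4 bytes per row, so if the image width is not a multiple of 4 you may also need:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed rows, 1 byte per pixel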
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture becomes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format while passing in integer values rendered a perfect image.
Because I was truncating the DICOM pixel values above 255 anyway, it seemed logical to copy the values into a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) restored normal functionality.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern (3.0+) OpenGL GL_LUMINANCE is deprecated, so the new way is to use the GL_RED format. That by itself results in a red texture, so to get around it you should create a custom shader, as the answers above have shown. In that shader you grab the red component of the texture (the only channel with data) and set the green and blue channels to the red channel's value. That converts it to grayscale, because a grayscale texture has all three RGB channels the same and the alpha/transparency channel set to 1.
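If writing a custom shader is inconvenient, the texture-swizzle mechanism shown earlier on this page can do the replication instead; a sketch, assuming a GL_RED texture is currently bound and GL 3.3 / ARB_texture_swizzle is available:
static const GLint gray_swizzle[] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, gray_swizzle);
With this set, sampling the texture in any shader returns (r, r, r, 1) without further changes.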