"cast" GL_R8 to GL_BGRA - c++

I'm doing some GPGPU programming with OpenGL.
I want to be able to write all my data to one-dimensional textures with the format GL_R8, so that I can basically treat it as an std::array object.
Then during rendering I would like to be able to set how the GPU should read the image, e.g. "cast" it to 1024x1024 BGRA.
Is this possible?
e.g. what I want to be able to do:
gpu::array<uint8_t> data(GL_R8, width*height*4);
gpu::bind(data, GL_TEXTURE0, gpu::format::bgra, width, height);

Use a buffer texture, then. There's no rule (that I know of) that says you can't hook the same buffer object up to multiple different textures. That would allow one texture to use it with the GL_R8 internal format, while another texture uses the very same buffer with the GL_RGBA8 format.
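A rough sketch of that idea, assuming buffer-texture support (GL 3.1 or ARB_texture_buffer_object); buf, texR8 and texRGBA8 are names made up for the example:

GLuint buf, texR8, texRGBA8;
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, width * height * 4, data, GL_STATIC_DRAW);

// one texture views the buffer as single-channel bytes...
glGenTextures(1, &texR8);
glBindTexture(GL_TEXTURE_BUFFER, texR8);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8, buf);

// ...and another views the very same storage as 4-channel texels
glGenTextures(1, &texRGBA8);
glBindTexture(GL_TEXTURE_BUFFER, texRGBA8);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, buf);

In GLSL both views are read with texelFetch on a samplerBuffer, so the 1024x1024 "cast" amounts to indexing y * width + x rather than using 2D texture coordinates.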

Related

Accessing RGBA components C-Style

Is there any way to access the texture data in GLSL in a C-like fashion?
By that I mean, if I have fragment.rgba, can I, within the shader, cast fragment.rg to a short and use it that way? I want to encode two pieces of scene information (two shorts) into a texture for use by another shader.
Edit: I'm declaring my texture this way:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mBits);
...and what I want to do in the shader would be done this way in C++:
short* aShortPtr = &fragment.rg;
*aShortPtr = 10000;
aShortPtr = &fragment.ba;
*aShortPtr = 2500;
Now I realize that I can't actually do it this way in the shader, but the RGBA data is stored as an integer anyway, so how can I write and read that integer instead of accessing everything as a vec4?
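For reference, the CPU side of that packing could look like this in C++ — a sketch that splits each short byte-wise across the channels of the GL_RGBA/GL_UNSIGNED_BYTE texture declared above (packShorts is a made-up helper):

#include <cstdint>

// hypothetical helper: store two 16-bit values in one RGBA8 texel
void packShorts(uint8_t* texel, uint16_t first, uint16_t second)
{
    texel[0] = first & 0xFF;          // R: low byte of the first short
    texel[1] = (first >> 8) & 0xFF;   // G: high byte of the first short
    texel[2] = second & 0xFF;         // B: low byte of the second short
    texel[3] = (second >> 8) & 0xFF;  // A: high byte of the second short
}

Each channel then comes back in the shader as a normalized value in [0, 1] (byte / 255), so the first short can be reconstructed as, for example, (fragment.r + fragment.g * 256.0) * 255.0.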

sRGB FBO render to texture

In my renderer, I produce an anti-aliased scene on a multisampled FBO, which is blitted to an FBO whose color attachment is a texture. The texture is then read during rendering to the framebuffer.
I'd like to update it so that I get gamma-correct results. The benefit of using an sRGB framebuffer is that it gives me somewhat better color precision, by storing nonlinear sRGB values directly in the framebuffer.
What I'm not sure about is what changes should I be making to get this, and what is being changed by the different settings.
It looks like the ARB_framebuffer_sRGB extension only deals with reading and blending operations on sRGB framebuffers. In my situation I'll need a texture with an sRGB internal format, which means I'd be using the EXT_texture_sRGB extension... using a linear texture format would disable the sRGB translation.
Edit: But I just saw this:
3) Should the ability to support sRGB framebuffer update and blending
be an attribute of the framebuffer?
RESOLVED: Yes. It should be a capability of some pixel formats
(mostly likely just RGB8 and RGBA8) that says sRGB blending can
be enabled.
This allows an implementation to simply mark the existing RGB8
and RGBA8 pixel formats as supporting sRGB blending and then
just provide the functionality for sRGB update and blending for
such formats.
Now I'm not so sure what to specify for my texture's pixel format.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention renderbuffers at all. Is it possible to use glRenderbufferStorageMultisample with an sRGB format, so that I can get sRGB storage with blending enabled?
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
What I'm not sure about is what changes should I be making to get this
That's because you seem unsure about what you're trying to do.
The key to all of this stuff is to at all times know what your input data is and what your output data is.
Your first step is to know what is stored in each of your textures. Does a particular texture store linear data or data in the sRGB colorspace? If it stores linear data, then use one of the linear image formats. If it stores sRGB colorspace data, then use one of the sRGB image formats.
This ensures that you are fetching the data you want in your shaders. When it comes time to write/blend them to the framebuffer, you now need to decide how to handle that.
Your screen expects values that have been pre-gamma corrected to the gamma of the display device. As such, if you provide linear values, you will get incorrect color output.
However, sometimes you want to write intermediate values. For example, if you're doing forward or deferred rendering, you will write accumulated lighting to a floating-point buffer, then use HDR tone mapping to boil it down to a [0, 1] image for display. Post-processing passes can read and write those intermediate images as well. Only the final [0, 1] output needs to go to an image in the sRGB colorspace.
When writing linear RGB values that you want converted into sRGB, you must enable GL_FRAMEBUFFER_SRGB. This is a special enable (texture fetches, by contrast, have no switch to turn sRGB decoding off) because sometimes you want to write values that are already in sRGB. This is often the case for GUI widgets, which were designed and built using colors already in the sRGB colorspace.
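A minimal sketch of how that enable is typically toggled per pass, assuming the framebuffer's color attachment already uses an sRGB image format (drawScene and drawGui are placeholders for your own rendering):

// pass 1: the shader outputs linear values; let GL encode them to sRGB on write/blend
glEnable(GL_FRAMEBUFFER_SRGB);
drawScene();   // placeholder: linear-space rendering

// pass 2: the values are already sRGB (e.g. GUI colors); write them through untouched
glDisable(GL_FRAMEBUFFER_SRGB);
drawGui();     // placeholder: UI rendering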
I cover issues relating to writing gamma-correct values and reading them from textures in my tutorial series. The first one explains why gamma is important and does explicit gamma correction in the shader. The second covers how to use sRGB images, both in textures and framebuffers.
Okay, and what about renderbuffers? the ARB_framebuffer_sRGB doc does not mention anything about renderbuffers.
And why would it? ARB_framebuffer_sRGB is only interested in the framebuffer and the nature of images in it. It neither knows nor cares where those images come from. It doesn't care if it's talking about the default framebuffer, a texture attached to an FBO, a renderbuffer attached to an FBO, or something entirely new someone comes up with tomorrow.
The extension states what happens when the destination image is in the sRGB colorspace and when GL_FRAMEBUFFER_SRGB is enabled. Where that image comes from is up to you.
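So for the renderbuffer case the question asks about, something along these lines should work (rbo is a made-up name and the sample count is arbitrary):

// an sRGB-encoded, multisampled renderbuffer used as a color attachment
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_SRGB8_ALPHA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);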
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
One is sized. The other is not. In theory, GL_SRGB_ALPHA could give you any bitdepth the implementation wanted. It could give you 2 bits per component. You're giving the implementation freedom to pick what it wants.
In practice, I doubt you'll find a difference. That being said, always use sized internal formats whenever possible. It's good to be specific about what you want, and it prevents the implementation from doing something stupid. OpenGL even has a set of sized formats that implementations are required to support exactly as specified.
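For instance, both of these calls are legal, but only the second pins down the storage (width, height and pixels are placeholders here):

// unsized: the implementation picks the bit depth of the sRGB storage
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// sized: explicitly request 8 bits per component
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);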

OpenGL float buffer

I'm writing my first ray tracer. I want to make it work in real-time.
I want to use opengl for display.
I want to write my screen to floating point buffer and display the buffer.
What extension and/or buffer type do I need?
Thanks in advance!
I'm writing my first ray tracer. I want to make it work in real-time.
Ambitious!
I want to use opengl for display. I want to write my screen to floating point buffer and display the buffer.
OpenGL can read from float buffers directly, e.g.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_FLOAT, data);
But OpenGL may choose any internal format that matches your selection: a GL_RGB internal format can be anything that can somehow store RGB data. You can be more specific about what you want; for example, GL_RGB16 tells OpenGL you want 16 bits of resolution per channel. The implementation may still choose to use 24 bits per channel, as that can hold 16-bit data. But ultimately the implementation decides which internal format it uses, based on the constraints you put upon it.
Floating point framebuffers and textures are supported in OpenGL through extensions GL_ARB_texture_float, GLX_ARB_fbconfig_float, WGL_ARB_fbconfig_float, but due to patent issues not all OpenGL implementations implement it (ATI and NVidia do).
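As a sketch of the texture route, assuming a GL 3.0-class context (or ARB_texture_float) and that the ray tracer writes each frame into a CPU-side float RGB buffer; tex and pixels are made-up names:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// sized floating-point internal format, filled from the float buffer
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
             GL_RGB, GL_FLOAT, pixels);

// per frame: re-upload the traced image, then draw a screen-sized textured quad with it
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGB, GL_FLOAT, pixels);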

How to load OpenGL texture from ARGB NSImage without swizzling?

I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA to glTexImage2D, and a type of GL_UNSIGNED_INT_8_8_8_8_REV.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up-front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_RGBA, but that results in a completely black image. My reading of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats differ, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very effectively on the GPU instead:
Load the ARGB data into a temporary RGBA texture.
Draw a full-screen quad with this texture, while rendering into the target texture, using a simple pixel shader.
Continue to load other resources; there is no need to stall the GPU pipeline.
Example pixel shader:
#version 140
uniform sampler2DRect unit_in;
void main() {
    // the ARGB data was uploaded as RGBA, so .gbar reorders the channels back to RGBA
    gl_FragColor = texture(unit_in, gl_FragCoord.xy).gbar;
}
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for graphics cards; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
uniform vec2 image_size; // added here: the texture's dimensions in pixels
void main()
{
    // gl_FragCoord.xy is in pixels; a sampler2D lookup needs normalized coordinates
    gl_FragColor = texture2D(image, gl_FragCoord.xy / image_size).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit pixel has A as the first byte rather than the last.
NSBitmapImageRep.bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel out. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
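For illustration, the offset trick boils down to something like this, assuming pixelBuffer points at tightly packed 4-byte ARGB pixels (the bytes returned by bitmapData):

// bytes are laid out A R G B A R G B ...; +1 starts the read at the first R
const unsigned char* argb = static_cast<const unsigned char*>(pixelBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, argb + 1);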
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;

// An array of tokens to set the ARGB swizzle in one function call.
static const GLint argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

// Bind the ARGB texture.
glBindTexture(GL_TEXTURE_2D, argb_texture);

// Set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure whether argb_swizzle is in the right order. Please correct me if it is not. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA and GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggests: "...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware."
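For what it's worth, the order can be read off from how the ARGB bytes land in the texture's channels when the data is uploaded unchanged; a sketch of the mapping (it matches the .gbar swizzle used in the shader-based answers above):

// byte order in memory:        A  R  G  B
// lands in texture channel:    r  g  b  a
// so, to present RGBA to the shader:
//   output R <- texture g  -> GL_GREEN
//   output G <- texture b  -> GL_BLUE
//   output B <- texture a  -> GL_ALPHA
//   output A <- texture r  -> GL_RED
static const GLint argb_swizzle[] = { GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED };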

Equiv of glDrawPixels that operates on GPU memory?

glDrawPixels(GLsizei width, GLsizei height, GLenum format, GLenum type, const void *pixels);
Is there a function like this, except instead of accessing CPU memory, it accesses GPU memory? [Either a texture or a framebuffer object]
Let's cover all the bases here.
First, a direct answer: yes, there is such a function. It's called glDrawPixels. I'm not kidding.
glDrawPixels can certainly read from GPU memory, provided that you use a buffer object as its source data (commonly called a "pixel buffer object"). Buffer objects are (theoretically, at least) in GPU memory, so they qualify.
However, you add onto this "either a texture or a framebuffer object". Under that qualification, you're asking, "is there a way to copy pixel data from one texture/framebuffer to the current framebuffer?"
Yes. glBlitFramebuffer can do that. It blits from the GL_READ_FRAMEBUFFER to the GL_DRAW_FRAMEBUFFER. And since you can add images from textures to FBOs, you can copy from images just fine. You can even copy from the default framebuffer to some renderbuffer or texture.
You can also employ glCopyImageSubData, which copies pixel rectangles from one image to another. It's a lot more convenient than glBlitFramebuffer if all you're doing is copying pixel data. This is quite new at present (GL 4.3, or ARB_copy_image). It cannot be used to copy data to the default framebuffer.
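A rough sketch of the blit path, assuming readFbo and drawFbo are already-complete framebuffer objects (the names and rectangle sizes are placeholders):

glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, drawFbo);   // use 0 here to target the default framebuffer
glBlitFramebuffer(0, 0, srcWidth, srcHeight,       // source rectangle
                  0, 0, dstWidth, dstHeight,       // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);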
If it is in a texture:
set up orthographic frustum
disable blending, depth test, etc.
bind texture
draw screen-aligned textured quad with correct texture coordinates
I use this, for example, in Compositor::_drawPixels; a rough sketch of the steps is below.
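A minimal fixed-function sketch of those steps, assuming the image lives in a GL_TEXTURE_2D named tex (a made-up name):

glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);

// orthographic projection covering the viewport with [0, 1] coordinates
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);

// screen-aligned quad with matching texture coordinates
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();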
glDrawPixels can read from a Buffer Object. Just do a
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, XXX)
before calling glDrawPixels.
Caveat: glDrawPixels is deprecated...
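Roughly, assuming the pixel data already lives in a buffer object named pbo (a made-up name); once a pixel unpack buffer is bound, the pointer argument is interpreted as a byte offset into that buffer:

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0); // offset 0 into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);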
Use glBlitFramebuffer, which operates on framebuffer objects (link), and it is not deprecated.
You can take advantage of format conversion, scaling and multisampling.