Luminance values clipped to [0, 1] during texture transfer? - opengl

I am uploading a host-side texture to OpenGL using something like:
GLfloat * values = new GLfloat[nRows * nCols];
// initialize values
for (int i = 0; i < nRows * nCols; ++i)
{
    values[i] = (i % 201 - 100) / 10.0f; // values from -10.0f .. +10.0f
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);
However, when I read back the texture using glGetTexImage(), it turns out that all values are clipped to the range [0..1].
First, I cannot find where this behavior is documented (I am using the Red Book for OpenGL 2.1).
Second, is it possible to change this behavior and let the values pass through unchanged? I want to access the unscaled, unclipped data in a GLSL shader.

I cannot find where this behavior is documented
In the actual specification, it's in the section on Pixel Rectangles, titled Transfer of Pixel Rectangles.
Second, is it possible to change this behavior and let the values pass unchanged?
Yes. If you want to use "unscaled, unclamped" data, you have to use a floating-point image format. The format of your texture is defined when you created its storage, probably by a call to glTexImage2D. The third parameter of that function defines the internal format, so use a proper floating-point internal format instead of a normalized fixed-point one (GL_LUMINANCE and friends are normalized formats, which is where the clamping to [0, 1] comes from).
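For example, a minimal sketch of allocating such a texture; this assumes an OpenGL 3.0+ context where GL_R32F and GL_RED are available (on 2.1 you would need ARB_texture_float and e.g. GL_LUMINANCE32F_ARB instead):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// GL_R32F keeps the full float range: no scaling, no clamping.
// Width/height here assume the data is laid out as nRows rows of nCols values.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, nCols, nRows, 0,
             GL_RED, GL_FLOAT, NULL);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nCols, nRows,
                GL_RED, GL_FLOAT, values);
With a float internal format, both glGetTexImage(..., GL_RED, GL_FLOAT, ...) and texture reads in the shader return the stored values, including the negative ones, unmodified.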

Related

using OpenGL to assign color to pixels

I have an NxM array. The data type is double and the values can range from 0.000001 to 1.0. I want to display them using OpenGL with colors in NxM pixels, e.g. 0.0001 ~ 0.0005 will be red, 0.0005 ~ 0.001 will be light red, like a picture with a legend for the different ranges.
I thought I should use texture for efficiency, but I do not quite understand how to match the value in the array to the texture. Do I first need to define a texture like a legend? How will the value in the array use the color in the texture?
Or should I first create a color lookup table and use glDrawPixels? How to define the color table in this case?
Following the approach posted by @Josef Rissling, I defined a legend, then each pixel gets an index into the legend position. I currently use glDrawPixels(). I suppose each legend position contains an R, G, B value. How should I set glPixelTransfer and glPixelMap()? The code I pasted below gives me just a black screen.
GLuint legend_image[1024][3]; // it contains { {0,0,255}, {0,0,254}, ... }

// GL initialization
glutInit(&c, &argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(width_, height_);
glutCreateWindow("GPU render");

// allocate buffer handle
glGenBuffers(1, &buffer_obj_);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, buffer_obj_);

// allocate GPU memory
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, width_ * height_, NULL, GL_DYNAMIC_DRAW_ARB);

// request a CUDA C name for this buffer
CUDA_CALL(cudaGraphicsGLRegisterBuffer(&res_, buffer_obj_, cudaGraphicsMapFlagsNone));

glPixelTransferi(GL_MAP_COLOR, true);
glPixelMapuiv(GL_PIXEL_MAP_I_TO_I, 1024, legend_image[0]);

glutDisplayFunc(draw_func);
glutIdleFunc(idle_func);
glutMainLoop();

void idle_func()
{
    // cuda kernel does the calculation, then converts to the pixel legend position pointed to by dev_ptr.
    cudaGraphicsMapResources(1, &res_, 0);
    unsigned int* dev_ptr;
    size_t size;
    cudaGraphicsResourceGetMappedPointer((void**)&dev_ptr, &size, res_);
    cuda_kernel(dev_ptr);
    cudaGraphicsUnmapResources(1, &res_, 0);
    glutPostRedisplay();
}

void draw_func()
{
    glDrawPixels(width_, height_, GL_COLOR_INDEX, GL_UNSIGNED_INT, 0);
    glutSwapBuffers();
}

// some cleanup code...
You should mention which language and which OpenGL version you are using...
The efficiency depends on what kind of function you use for the mapping; texture lookups are not cheap, especially if you are not already using a texture as the array (then you have to copy the data first).
But for your mapping example:
You can create a legend texture (in which you apply your non-linear color space) that allows you to map from your value range to a color by pixel offset (where the mapped color value lies). The general case, as a pseudo shader, would then be:
map(value)
{
    pixelStartPosition, pixelEndPosition;
    pixelRange = pixelEndPosition - pixelStartPosition;
    valueNormalizer = 1.0 / (valueMaximum - valueMinimum);
    pixelLegendPosition = pixelStartPosition + pixelRange * ((value - valueMinimum) * valueNormalizer);
    return pixelLegendPosition;
}
Say you have a legend texture that is 2000 pixels wide (positions 0 to 1999) and values in the range 0 to 1:
pixelStartPosition=0
pixelEndPosition=1999
pixelRange = pixelEndPosition - pixelStartPosition // 1999
valueNormalizer = 1.0 / (valueMaximum - valueMinimum) // 1.0
pixelLegendPosition = pixelStartPosition + pixelRange * ( (value-valueMinimum) * valueNormalizer)
// 0 + 1999 * ( (value-0) * 1 ) ===> 1999 * value
If you need to transmit the array data to a texture, there are several ways to do so; it depends mainly on your version/language, but glTexImage2D is a good direction.
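To make the pseudo shader concrete, the lookup in a GLSL 1.20-style fragment shader could look roughly like this; the uniform names and the idea of keeping the data in a single-channel texture and the legend in a 1D texture are assumptions for the sketch, not something from your code:
// Data values live in a single-channel 2D texture, the colour ramp in a 1D legend texture.
uniform sampler2D u_values;
uniform sampler1D u_legend;
uniform float u_valueMin;
uniform float u_valueMax;

varying vec2 v_texCoord;

void main()
{
    float value = texture2D(u_values, v_texCoord).r;
    // Same idea as pixelLegendPosition above, but normalized to [0, 1]
    // because texture coordinates are used instead of pixel offsets.
    float legendPos = (value - u_valueMin) / (u_valueMax - u_valueMin);
    gl_FragColor = texture1D(u_legend, legendPos);
}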

How to use a .raw file in opengl

I'm trying to read a .raw image format and do some modifications on it in OpenGL. I can read the image like this:
int width, height;
BYTE * data;
FILE * file;
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;
width = 256;
height = 256;
data = malloc( width * height * 3 );
fread( data, width * height * 3, 1, file );
fclose( file );
But I don't know how to use glDrawPixels to draw the picture.
My second problem is that I don't know how I can access each pixel. I mean, in a .raw image format each pixel should have 3 integers for storing the RGB values (am I right?). How can I access these RGB values directly?
There's no such thing as a .raw in the hard and fast sense. The name implies image data with no header but doesn't specify the format of the data. RGB is likely but so is RGBA and it's trivial to think of almost endless other possibilities.
Assuming RGB ordering, one byte per channel, then: each pixel is three bytes wide. So the nth pixel is:
r = data[n*3 + 0]
g = data[n*3 + 1]
b = data[n*3 + 2]
Assuming the data is set out so that the pixels are stored in left-to-right order, line by line, then on the first line the pixel at x=3 is at n=3, on the second it's at n=(width of first line)+3, on the third it's at n=(combined width of first two lines)+3, etc.
So:
r = data[(x + y*width)*3 + 0]
g = data[(x + y*width)*3 + 1]
b = data[(x + y*width)*3 + 2]
To use glDrawPixels just follow what the manual tells you to specify as the parameters. It says:
void glDrawPixels( GLsizei width,
                   GLsizei height,
                   GLenum format,
                   GLenum type,
                   const GLvoid * data);
You say that width and height are 256. You've said that the format is RGB. Scan down the documentation and you'll see that the corresponding GLenum is GL_RGB. You're saying each channel is a single byte in size. So that's GL_UNSIGNED_BYTE. You've loaded the data to data. So:
glDrawPixels(256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
Further comments: obviously get this working first so you've something to build on, but glDrawPixels is almost unused in practice; as a result it isn't even part of OpenGL ES or, correspondingly, WebGL. Look at the semantics of the thing: you supply your buffer on every call, so OpenGL can't know whether it has been modified since the last call, and every call therefore transfers your data from CPU to GPU. Look into submitting your data once as a texture and drawing using geometry, as sketched below. That saves the per-call transfer cost and is therefore a lot more efficient.
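A rough sketch of that texture-based route, assuming a legacy fixed-function context and the same 256x256 RGB buffer (a modern setup would use vertex buffers and shaders instead):
// Upload once, after loading the file.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows of 3-byte pixels are tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);

// Draw every frame: a full-screen textured quad.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();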

32bit (int) Buffer to Greyscale/Colour-mapped Image in OpenGL, Single Channel 32 bit Texture or TBO?

I have an int buffer of intensity values, I want to display this as a greyscale/colour-mapped image in OpenGL.
What is the best way to achieve this?
Standard Texture?
Can I do it via a standard glTexture, so something like:
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_R32f, width, height, 0, OpenGL.GL_RED_INTEGER, OpenGL.GL_UNSIGNED_INT, pixels);
In the shader I am under the impression I would use it the same as any other texture except I would use usampler2D instead of sampler2D, at which point I would get the true integer value (i.e. not 0-1 range).
TBO?
Or would it be better to achieve with a TBO and do something like:
gl.TexBuffer(OpenGL.GL_TEXTURE_BUFFER, OpenGL.GL_R32F, bufferID);
In terms of the shader I am actually quite confused. I have seen things like g = texelFetch(u_tbo_tex, offset + 1).r. So I am guessing I would have to translate the texture coordinates into an offset, something like:
int offset = tex_coord.s + (tex_coord.t * imageWidth);
but then texelFetch actually returns a vec4, so presumably I would use:
int intensity = texelFetch( buffer, offset).r
But then as tex_coord.s & t are in 0-1, that would imply the need to:
int offset = tex_coord.s*imageHeight + ((tex_coord.t * imageWidth) * imageWidth);
Other Buffer
I have very little experience with buffer objects. I feel like all I am really doing is using a buffer in GL... so I do feel like I am over-complicating it and missing the "penny drop".
Important Notes
Why int?: In some cases I do some manipulation on the data before turning it into a colour, and would prefer to do this at 32-bit precision to avoid potential precision errors. Arguably it might not make a difference as it eventually becomes a screen colour...
Data update frequency: the intensity data is updated occasionally by user events, but certainly not multiple times per frame (so I am presuming STATIC is more appropriate than DYNAMIC in this case?).
Use: the data is mainly for GL, so _DRAW. There is the possibility that the application could make use of GL to compute some values for it, but I would probably create a separate READ buffer in that case.
The highest integer value I have seen so far is 90,000, so I know it goes out of the 16-bit integer range.
Note: I am doing this through SharpGL and I have been unable to test at the moment, as it has no definition for GL_R32f, so I shall have to find gl.h on my Windows platform (always fun) and add the correct const number.
You can use a normal texture with integer/unsigned integer format:
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_R32UI, width, height, 0, OpenGL.GL_RED_INTEGER, OpenGL.GL_UNSIGNED_INT, pixels);
In the shader you can use a usampler2D; since the texture function has an overload for this sampler type, you directly get the integer values:
uniform usampler2D myUTexture;
uint value = texture(myUTexture, texCoord).r;
Edit:
Just for completeness: texelFetch also has an overload for all 2D sampler types. The difference between texture and texelFetch is the coordinate system used ([0,1] texture coordinates for texture, integer pixel coordinates for texelFetch) and that texelFetch does not take any interpolation/mipmapping into account.
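As a sketch of the greyscale mapping on top of that, in a GLSL 3.30 fragment shader; u_maxValue is an assumed uniform you would set from the application (e.g. to your observed maximum of 90,000), and note that integer textures must be sampled with GL_NEAREST filtering:
#version 330 core

uniform usampler2D u_intensity;   // the GL_R32UI texture
uniform float u_maxValue;         // assumed: the largest intensity you expect

in vec2 v_texCoord;
out vec4 fragColor;

void main()
{
    uint raw = texture(u_intensity, v_texCoord).r;   // true integer value, not [0, 1]
    float grey = clamp(float(raw) / u_maxValue, 0.0, 1.0);
    fragColor = vec4(grey, grey, grey, 1.0);
}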

GLbyte Data in Strange Format -- NPR Technique

I'm working on an edge detection algorithm for an NPR technique. I plan on just using a difference of Gaussians to find the edges.
I thought that I would take a copy of the current screen, then analyze and recolor the pixels so that I have a map to draw the edges with.
This is my screen copy logic so far:
int width = rd->width();
int height = rd->height();

GLbyte * data = (GLbyte *)malloc( width * height * 3 );
if( data ) {
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
}

float color = 0;
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        color = data[i*width+j];
    }
}
Seeing as I'm just grabbing everything, I didn't think that the alpha component was necessary to copy. rd is my render device, and data is being output like this:
2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Wy2Vy2Vy2Vy2Vx2Vx2Vx2Vx2Vx2Vx2Vx2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy2Vy... (it continues in this pattern for thousands of characters)
And I have no idea how to handle that. I tried reading a value into the float color as shown above, but that didn't really help me, as I don't really know what it means. Is each value I'm reading an intensity for the pixel, or do I need to read three data points in a row to get all the channels?
What is a good way to get the data displayed on the screen, modify it, and redraw it?
You are telling glReadPixels that you want to read RGB values as 3 bytes per pixel, yet you are putting them into a single float value. This cannot work.
Try the following instead:
unsigned char color[3];
for ( ... )
{
    color[0] = data[3*(i*width+j)];
    color[1] = data[3*(i*width+j)+1];
    color[2] = data[3*(i*width+j)+2];
}
I haven't tried it so there might be some mistakes. But you get the idea.
You could also tell glReadPixels that you only want GL_RED as GL_FLOAT and put it in a float buffer, if you are processing black-and-white images and only want the intensity. Or GL_LUMINANCE; it's really up to you, but you need to be consistent between the parameters you pass to glReadPixels and the way you parse that data.
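For example, a sketch of that single-channel float read-back, reusing the width/height from your code; each value comes back in [0.0, 1.0]:
float * intensity = (float *)malloc( width * height * sizeof(float) );
if( intensity ) {
    glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, intensity);
    // intensity[y * width + x] is the red/intensity value of the pixel at (x, y),
    // with y = 0 at the bottom of the window.
}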

OpenGL convert RGBA byte pixels to image integers array

I read pixels from an OpenGL 2D texture into a byte array (unsigned char), as is usually done. But now I need to convert it into an image array (of integers, I suppose) so that it has the layout and pixel range of images loaded from the CPU, for the reverse process.
My question is: is it enough just to do:
glGetTexImage(GL_TEXTURE_2D,0,GL_RGBA,GL_UNSIGNED_INT,bytes);
instead of :
glGetTexImage(GL_TEXTURE_2D,0,GL_RGBA,GL_UNSIGNED_BYTE,bytes);
and then iterate over each integer and convert it from the 0-1 range to 0-255?
I haven't really found any example doing such a conversion without using third-party image libs.
If I do this :
size_t length = _viewWidth * _viewHeight * 3; // 3 bytes per BGR pixel
GLubyte *bytes = (GLubyte*)malloc(length);

/////////////// read pixels from tex /////////////////////
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, bytes);

uint8_t Rc, Gc, Bc;
for (size_t x = 0; x < length; x += 3)
{
    Bc = bytes[x];
    Gc = bytes[x + 1];
    Rc = bytes[x + 2];
}
Are Rc, Gc and Bc going to be in the 0-255 range?
When OpenGL loads a texture, it converts the incoming pixels into the internal format specified in the glTexImage*() call (its third parameter). This operation may include a mapping step from the external pixel format and type (the format and type parameters of glTexImage*()) to that internal format, and often includes mapping into one of the ranges [0,1] or [-1,1], and then onto the range of the internal format for each component. For example, a pixel type of GL_FLOAT and an internal format of GL_RGBA8 will cause the input values to be mapped from the range [0,1] into the range [0,255].
When you retrieve the texels using glGetTexImage(), the process is done in reverse, so the output pixel values (per component) will be in the range of the specified output type (e.g. GL_UNSIGNED_INT in your case). The range for unsigned ints is [0, 2^32 - 1], so that will be the range of values returned in your integer image array. If you need those values in a different range (e.g. that of GL_UNSIGNED_BYTE), then you would need to manually convert the values into the range you need.
Personally, if one of the data types OpenGL can return matches the range of values you need, try to use that type.
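For instance, if 0-255 components are really what you need, one option (a sketch with illustrative names, based on your variables) is to keep the GL_UNSIGNED_BYTE read-back and simply widen the bytes into ints afterwards:
size_t count = _viewWidth * _viewHeight * 4;          // 4 components per RGBA pixel
GLubyte *bytes = (GLubyte *)malloc(count);
int *ints = (int *)malloc(count * sizeof(int));

glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytes);

for (size_t i = 0; i < count; ++i)
    ints[i] = bytes[i];   // each component is already 0-255, just stored as an int now

free(bytes);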