I have an NxM array of doubles whose values range from 0.000001 to 1.0. I want to display it with OpenGL as an NxM block of colored pixels, e.g. 0.0001 ~ 0.0005 would be red, 0.0005 ~ 0.001 light red, like a picture with a legend for the different ranges.
I thought I should use a texture for efficiency, but I do not quite understand how to map the values in the array to the texture. Do I first need to define a texture that acts as a legend? How would a value in the array pick up its color from the texture?
Or should I first create a color lookup table and use glDrawPixels()? How would I define the color table in that case?
Following the approach posted by @Josef Rissling, I defined a legend, and each pixel now gets an index into the legend. I currently use glDrawPixels(). I assume each legend entry contains an R, G, B value. How should I set glPixelTransfer() and glPixelMap()? The code pasted below gives me just a black screen.
GLuint legend_image[1024][3]; // it contains { {0,0,255}, {0,0,254}, ...}
// GL initialization;
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(width_, height_);
glutCreateWindow("GPU render");
// allocate buffer handle
glGenBuffers(1, &buffer_obj_);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, buffer_obj_);
// allocate GPU memory
glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, width_ * height_, NULL, GL_DYNAMIC_DRAW_ARB);
// request a CUDA C name for this buffer
CUDA_CALL(cudaGraphicsGLRegisterBuffer(&res_, buffer_obj_, cudaGraphicsMapFlagsNone));
glPixelTransferi(GL_MAP_COLOR, true);
glPixelMapuiv(GL_PIXEL_MAP_I_TO_I, 1024, legend_image[0]);
glutDisplayFunc(draw_func);
glutIdleFunc(idle_func);
glutMainLoop();
void idle_func()
{
// cuda kernel to do calculation, and then convert to pixel legend position which is pointed by dev_ptr.
cudaGraphicsMapResources(1, &res_, 0);
unsigned int* dev_ptr;
size_t size;
cudaGraphicsResourceGetMappedPointer((void**)&dev_ptr, &size, res_);
cuda_kernel(dev_ptr);
cudaGraphicsUnmapResources(1, &res_, 0);
glutPostRedisplay();
}
void draw_func()
{
glDrawPixels(width_, height_, GL_COLOR_INDEX, GL_UNSIGNED_INT, 0);
glutSwapBuffers();
}
// some cleanup code...
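(For reference: the fixed-function pipeline expects separate index-to-component maps, GL_PIXEL_MAP_I_TO_R/G/B/A, rather than one packed RGB table. A minimal sketch of supplying such a legend, with made-up names LEGEND_SIZE and legend_r/g/b and map values as floats in 0.0 .. 1.0, might look like this:)
// Hypothetical sketch, not a verified fix for the black screen.
static const int LEGEND_SIZE = 1024;
GLfloat legend_r[LEGEND_SIZE], legend_g[LEGEND_SIZE], legend_b[LEGEND_SIZE];
// ... fill legend_r/g/b with the legend colors, normalized to 0.0 .. 1.0 ...
glPixelMapfv(GL_PIXEL_MAP_I_TO_R, LEGEND_SIZE, legend_r);
glPixelMapfv(GL_PIXEL_MAP_I_TO_G, LEGEND_SIZE, legend_g);
glPixelMapfv(GL_PIXEL_MAP_I_TO_B, LEGEND_SIZE, legend_b);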
You should mention which language and which OpenGL version you are using...
The efficiency depends on what kind of function you use for the mapping; texture lookups are not cheap, especially if your data is not already stored in a texture (then you have to copy it first).
But for your mapping example:
You can create a legend texture (in which you have already applied your non-linear color space) that lets you map from your value range to a color via a pixel offset (the position where the mapped color value lies). The general case, as a pseudo shader, would then be:
map(value)
{
pixelStartPosition, pixelEndPosition;
pixelRange = pixelEndPosition - pixelStartPosition;
valueNormalizer = 1.0 / (valueMaximum - valueMinimum);
pixelLegendPosition = pixelStartPosition + pixelRange * ( (value-valueMinimum) * valueNormalizer);
return pixelLegendPosition;
}
Say you have a legend texture that is 2000 pixels wide (positions 0 to 1999) and a value range from 0 to 1:
pixelStartPosition=0
pixelEndPosition=1999
pixelRange = pixelEndPosition - pixelStartPosition // 1999
valueNormalizer = 1.0 / (valueMaximum - valueMinimum) // 1.0
pixelLegendPosition = pixelStartPosition + pixelRange * ( (value-valueMinimum) * valueNormalizer)
// 0 + 1999 * ( (value-0) * 1 ) ===> 1999 * value
If you need to transmit the array data to a texture, there are several ways to do so; it depends mainly on your version/language, but glTexImage2D is a good direction.
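In GLSL, the legend lookup sketched above could look roughly like this fragment shader (an illustration only; the data/legend samplers and the valueMinimum/valueMaximum uniforms are assumed names, with the NxM values uploaded as a single-channel texture and the legend as a 1D texture):
uniform sampler2D data;        // the NxM value array uploaded as a texture
uniform sampler1D legend;      // the color legend
uniform float valueMinimum;    // e.g. 0.000001
uniform float valueMaximum;    // e.g. 1.0
varying vec2 texCoord;

void main() {
    float value = texture2D(data, texCoord).r;
    // normalize the value into 0..1, exactly like the pseudo code above
    float t = (value - valueMinimum) / (valueMaximum - valueMinimum);
    gl_FragColor = texture1D(legend, t);
}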
I am trying to create a Texture2D in D3D11 from std::vector data. This texture is going to be used as the variable rate shading surface. For testing purposes the texture is filled with the value 11 everywhere. There is a lookup table that maps each 16x16 pixel block to a shading rate; in this case the value 11 results in 4x4 coarse shading.
But it looks like some values sampled along the x axis are 0 instead of 11. The vertical sampling looks correct.
D3D11_TEXTURE2D_DESC srsDesc;
ZeroMemory(&srsDesc, sizeof(srsDesc));
srsDesc.Width = (UINT)g_variableShadingGranularity.x;
srsDesc.Height = (UINT)g_variableShadingGranularity.y;
srsDesc.ArraySize = (UINT)1;
srsDesc.MipLevels = (UINT)1;
srsDesc.SampleDesc.Count = (UINT)1;
srsDesc.SampleDesc.Quality = (UINT)0;
srsDesc.Format = DXGI_FORMAT_R8_UINT;
srsDesc.Usage = D3D11_USAGE_DEFAULT;
srsDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
srsDesc.CPUAccessFlags = (UINT)0;
srsDesc.MiscFlags = (UINT)0;
// fill texture
g_srs_texture_data = std::vector<unsigned int>(static_cast<std::size_t>(srsDesc.Width) * srsDesc.Height, 11);
D3D11_SUBRESOURCE_DATA sd = {};
const uint32_t rowPitch = srsDesc.Width * sizeof(unsigned int);
sd.pSysMem = static_cast<const void*>(g_srs_texture_data.data());
sd.SysMemPitch = rowPitch;
HRESULT ok = s_device->CreateTexture2D(&srsDesc, &sd, &g_shadingRateSurface);
To be more precise: after every 16x16 pixel block there are 3 black blocks along the x axis. The black blocks indicate that the texture mapped to a "CULL" shading rate for that pixel block; in the lookup table the "CULL" shading rate is defined for the value 0. If I change the lookup table so that 0 maps to, for example, a 4x4 shading rate, then the image is rendered completely with 4x4 coarse shading and no black bars appear. This means that 0s must be sampled from the texture even though it should only contain 11s.
Any ideas what could be causing this?
I figured out the solution. The data I provide has to be of type unsigned char. I think that's because the texture format is DXGI_FORMAT_R8_UINT, which is 1 byte per texel, whereas unsigned int is 4 bytes. The UINT in the format name made me think it was a C++ unsigned int.
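For reference, a minimal sketch of the corrected fill/upload (srsTextureData is a renamed local here; the rest reuses the names from the code above):
// One byte per texel to match DXGI_FORMAT_R8_UINT.
std::vector<unsigned char> srsTextureData(
    static_cast<std::size_t>(srsDesc.Width) * srsDesc.Height, 11);

D3D11_SUBRESOURCE_DATA sd = {};
sd.pSysMem = srsTextureData.data();
sd.SysMemPitch = srsDesc.Width * sizeof(unsigned char); // row pitch in bytes
HRESULT ok = s_device->CreateTexture2D(&srsDesc, &sd, &g_shadingRateSurface);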
I'm currently trying to set up a 2D sprite animation with OpenGL 4.
For example, I've designed a smoothly rotating ball in Gimp. There are 32 frames (8 frames on 4 rows).
I aim to create a sprite atlas within a 2D texture and store my sprite data in buffers (VBOs). My sprite rectangle would always be the same (i.e. rect(0,0,32,32)), but my texture coordinates will change each time the frame index is incremented.
I wonder how to modify the coordinates.
As the sprite tiles are stored on several rows, it appears to be difficult to manage in the shader.
Should I modify the sprite texture coordinates within the buffer using glBufferSubData()?
I spent a lot of time with OpenGL 1.x, only got back to OpenGL a few months ago, and realized many things have changed. I will try several options, but your suggestions and experience are welcome.
As the sprite tiles are stored on several rows, it appears to be difficult to manage in the shader.
Not really: all your sprites are the same size, so you get a perfect uniform grid, and going from a 1D index to 2D is just a matter of division and modulo. Not really hard.
However, why do you even store the individual frames in an m×n grid? Sure, you could store them in just one row. But in modern GL we have array textures. These are basically a set of independent 2D layers, all of the same size. You access them with a 3D coordinate, the third coordinate being the layer, from 0 to n-1. This is ideally suited for your use case, it eliminates any issues of texture filtering/bleeding at the borders, and it also works well with mipmapping (if you need that). When array textures were introduced, the minimum number of layers an implementation was required to support was 64 (it is much higher nowadays), so 32 frames will be a piece of cake even for old GPUs.
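A sketch of what the per-fragment lookup looks like with an array texture (the uniform names are my own; the frame index would be set per draw call):
#version 330 core
uniform sampler2DArray spriteFrames; // one layer per animation frame
uniform int frame;                   // current frame index, 0..31
in vec2 texCoord;
out vec4 fragColor;

void main() {
    // the third coordinate selects the layer
    fragColor = texture(spriteFrames, vec3(texCoord, float(frame)));
}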
You could do this a million ways but I'm going to propose a naive solution:
Create a VBO with 32 (frame squares) * 2 (triangles per frame square) * 3 (triangle vertices) * 5 (x, y, z, u, v per vertex) = 960 floats of space. Fill it with the vertices of all your sprites, two triangles per frame.
Now, according to the docs of glDrawArrays, you can specify where to start and how many vertices to render. Using this you can do the following:
int verticesPerFrame = 6; // 2 triangles * 3 vertices; glDrawArrays counts vertices, not floats
int vertexToStart = verticesPerFrame * currentBallFrame;
glDrawArrays(GL_TRIANGLES, vertexToStart, verticesPerFrame);
No need to modify the VBO. From my point of view this is overkill just to render 32 frames one at a time; there are better solutions to this problem, but this is the simplest one while learning OpenGL 4.
In OpenGL 2.1, I'm using your 2nd option:
void setActiveRegion(int regionIndex)
{
UVs.clear();
int numberOfRegions = (int) textureSize / spriteWidth;
// cast to float so these are not integer divisions
float uv_x = (regionIndex % numberOfRegions) / (float) numberOfRegions;
float uv_y = (regionIndex / numberOfRegions) / (float) numberOfRegions;
glm::vec2 uv_up_left = glm::vec2( uv_x , uv_y );
glm::vec2 uv_up_right = glm::vec2( uv_x+1.0f/numberOfRegions, uv_y );
glm::vec2 uv_down_right = glm::vec2( uv_x+1.0f/numberOfRegions, (uv_y + 1.0f/numberOfRegions) );
glm::vec2 uv_down_left = glm::vec2( uv_x , (uv_y + 1.0f/numberOfRegions) );
UVs.push_back(uv_up_left );
UVs.push_back(uv_down_left );
UVs.push_back(uv_up_right );
UVs.push_back(uv_down_right);
UVs.push_back(uv_up_right);
UVs.push_back(uv_down_left);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferSubData(GL_ARRAY_BUFFER, 0, UVs.size() * sizeof(glm::vec2), &UVs[0]);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
Source: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-11-2d-text/
He implemented it to render 2D text, but it's the same concept!
I hope this helps!
We already have a highly optimized class in our API to read 3D LUT (Nuke format) files and apply the transform to an image. So instead of iterating pixel by pixel and converting RGB values to Lab (RGB->XYZ->Lab) with the full formulae, I think it would be better to generate a lookup table for the RGB to Lab (or XYZ to Lab) transform. Is this possible?
I understand how a 3D LUT works for RGB to RGB transforms, but I am confused about RGB to Lab, since L, a and b have different ranges. Any hints?
EDIT:
Can you please explain how the LUT works?
Here's one explanation: link
E.g. below is my understanding of a 3D LUT for an RGB->RGB transform.
A sample Nuke .3dl LUT file:
0 64 128 192 256 320 384 448 512 576 640 704 768 832 896 960 1023
R, G, B
0, 0, 0
0, 0, 64
0, 0, 128
0, 0, 192
0, 0, 256
.
.
.
0, 64, 0
0, 64, 64
0, 64, 128
.
.
Here, instead of generating a 1024*1024*1024 table for the 10-bit source RGB values, each of the R, G and B ranges is quantized to 17 values, giving a 17*17*17 = 4913 row table.
The first line gives the possible quantized values (I think only the length and the maximum value matter here). Now suppose the source RGB value is (20, 20, 190); the output would be line #4, i.e. (0, 0, 192) (using some interpolation technique). Is that correct?
This one is for a 10-bit source; could you generate a similar one for 8-bit by changing the range to 0..255?
Similarly, how would you proceed for an sRGB->Lab conversion?
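(For illustration, a rough sketch of the nearest-lattice-point lookup described above; this is my own sketch, not the API mentioned. It assumes the 17x17x17 table is stored flat with B varying fastest, as in the file excerpt, and it skips the interpolation a real implementation would do.)
#include <vector>

struct RGB10 { int r, g, b; };   // 10-bit components, 0..1023

RGB10 lutLookup(const std::vector<RGB10>& lut, const RGB10& in) {
    const int step = 64, size = 17;
    int ri = (in.r + step / 2) / step;          // nearest lattice index, 0..16
    int gi = (in.g + step / 2) / step;
    int bi = (in.b + step / 2) / step;
    return lut[(ri * size + gi) * size + bi];   // B varies fastest in the file
}
// e.g. (20, 20, 190) snaps to indices (0, 0, 3), i.e. the row (0, 0, 192) above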
An alternative approach makes use of graphics hardware, aka "general purpose GPU computing". There are several tools for this, e.g. OpenGL GLSL, OpenCL, CUDA, ... You can gain a speedup of 100x or more compared to a CPU solution.
The most "compatible" solution is to use OpenGL with a special fragment shader in which you perform the computation. This means: upload your input image as a texture to the GPU and render it into a (target) framebuffer with a special shader program which converts your RGB data to Lab (it could also make use of a lookup table, but most float computations on the GPU are faster than table / texture lookups, so we won't do that here).
First, port your RGB to Lab conversion function to GLSL. It should work on float numbers, so if you used integral values in your original conversion, get rid of them. OpenGL uses "clamped" values, i.e. float values between 0.0 and 1.0. It will look like this:
vec3 rgbToLab(vec3 rgb) {
vec3 lab = ...;
return lab;
}
Then, write the rest of the shader, which will fetch a pixel of the (RGB) texture, calls the conversion function and writes the pixel in the color output variable (don't forget the alpha channel):
uniform sampler2D texture;
varying vec2 texCoord;
void main() {
vec3 rgb = texture2D(texture, texCoord).rgb;
vec3 lab = rgbToLab(rgb); // call the conversion function ported above
gl_FragColor = vec4(lab, 1.0);
}
The corresponding vertex shader should write texCoord values of (0,0) in the bottom left and (1,1) in the top right of a target quad filling the whole screen (framebuffer).
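A minimal vertex shader for that could look like this (a sketch, using the legacy gl_Vertex built-in to match the varying/gl_FragColor style above):
varying vec2 texCoord;

void main() {
    // the quad is drawn from (-1,-1) to (1,1); remap to texture coordinates 0..1
    texCoord = gl_Vertex.xy * 0.5 + 0.5;
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
}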
Finally, use this shader program in your application by rendering into a framebuffer with the same size as your image. Render a quad which fills the whole region (without setting any transformations, just render a quad from the 2D vertices (-1,-1) to (1,1)). Set the uniform texture to your RGB image which you uploaded as a texture. Then, read back the framebuffer from the device, which should hopefully contain your image in Lab color space.
Assuming your source colorspace is a triplet of bytes (RGB, 8 bits each) and both color spaces are stored in structs with the names SourceColor and TargetColor respectively, and you have a conversion function given like this:
TargetColor convert(SourceColor color) {
return ...
}
Then you can create a table like this:
TargetColor table[256][256][256]; // 16M * sizeof(TargetColor) => put on heap!
for (int r = 0; r < 256; ++r)
    for (int g = 0; g < 256; ++g)
        for (int b = 0; b < 256; ++b)
            table[r][g][b] = convert({r, g, b}); // (construct SourceColor from r, g, b)
Then, for the actual image conversion, use an alternative convert function (I'd suggest writing an image conversion class which takes a function pointer / std::function in its constructor, so it's easily exchangeable):
TargetColor convertUsingTable(SourceColor source) {
return table[source.r][source.g][source.b];
}
Note that the space consumption is 16M * sizeof(TargetColor) (assuming 32 bits for Lab this will be 64 MB), so the table should be heap-allocated (it can live in-class if your class itself lives on the heap, but it's better to allocate it with new[] in the constructor and store it in a smart pointer).
I am uploading a host-side texture to OpenGL using something like:
GLfloat * values = new GLfloat[nRows * nCols];
// initialize values
for (int i = 0; i < nRows * nCols; ++i)
{
values[i] = (i % 201 - 100) / 10.0f; // values from -10.0f .. + 10.0f
}
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);
However, when I read back the texture using glGetTexImage(), it turns out that all values are clamped to the range [0..1].
First, I cannot find where this behavior is documented (I am using the Red Book for OpenGL 2.1).
Second, is it possible to change this behavior and let the values pass unchanged? I want to access the unscaled, unclamped data in a GLSL shader.
I cannot find where this behavior is documented
In the actual specification, it's in the section on Pixel Rectangles, titled Transfer of Pixel Rectangles.
Second, is it possible to change this behavior and let the values pass unchanged?
Yes. If you want to use "unscaled, unclamped" data, you have to use a floating-point image format. The format of your texture is defined when you create the storage for it, probably by a call to glTexImage2D. The third parameter of that function defines the internal format. So use a proper floating-point internal format instead of a normalized fixed-point one.
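For example, assuming the driver exposes ARB_texture_float (on core GL 3.x you would use GL_R32F with GL_RED instead of the luminance formats), the allocation and upload could look like this:
// Allocate floating-point storage so the values are neither scaled nor clamped.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, nRows, nCols, 0,
             GL_LUMINANCE, GL_FLOAT, NULL);
// ... then upload as before:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, nRows, nCols, GL_LUMINANCE, GL_FLOAT, values);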
Imagine the following scenario: you have a set of RPG character spritesheets in PNG format and you want to use them in an OpenGL application.
The separate characters are (usually) 16 by 24 pixels in size (that is, 24 pixels tall) and may be at any width and height without leaving padding. Kinda like this:
[spritesheet image; source: kafuka.org]
I already have the code to determine an integer-based clipping rectangle given a frame index and size:
int framesPerRow = sheet.Width / cellWidth;
int framesPerColumn = sheet.Height / cellHeight;
framesTotal = framesPerRow * framesPerColumn;
int left = (frameIndex % framesPerRow) * cellWidth;
int top = (frameIndex / framesPerRow) * cellHeight;
//Clipping rect's width and height are obviously cellWidth and cellHeight.
Running this code with frameIndex = 11, cellWidth = 16, cellHeight = 24 would return a clip rect of (32, 24)-(48, 48), assuming it's stored as Right/Bottom as opposed to Width/Height.
The actual question
Now, given a clipping rectangle and an X/Y coordinate to place the sprite on, how do I draw this in OpenGL? Having the zero coordinate in the top left is preferred.
You have to start thinking in "texture space" where the coordinates are in the range [0, 1].
So if you have a sprite sheet:
class SpriteSheet {
int spriteWidth, spriteHeight;
int texWidth, texHeight;
int tex;
public:
SpriteSheet(int t, int tW, int tH, int sW, int sH)
: tex(t), texWidth(tW), texHeight(tH), spriteWidth(sW), spriteHeight(sH)
{}
void drawSprite(float posX, float posY, int frameIndex);
};
All you have to do is submit both the vertices and the texture coordinates to OpenGL:
void SpriteSheet::drawSprite(float posX, float posY, int frameIndex) {
const float verts[] = {
posX, posY,
posX + spriteWidth, posY,
posX + spriteWidth, posY + spriteHeight,
posX, posY + spriteHeight
};
const float tw = float(spriteWidth) / texWidth;
const float th = float(spriteHeight) / texHeight;
const int numPerRow = texWidth / spriteWidth;
const float tx = (frameIndex % numPerRow) * tw;
const float ty = (frameIndex / numPerRow + 1) * th;
const float texVerts[] = {
tx, ty,
tx + tw, ty,
tx + tw, ty + th,
tx, ty + th
};
// ... Bind the texture, enable the proper arrays
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texVerts);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4); // the vertices are listed around the quad, so a triangle fan covers it
}
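Usage would then be something like this (the texture id and the sheet/sprite sizes are placeholders):
// a 256x256 sheet of 32x32 sprites, previously loaded into textureId
SpriteSheet sheet(textureId, 256, 256, 32, 32);
sheet.drawSprite(100.0f, 100.0f, currentFrame);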
Frank's solution is already very good.
Just a (very important) side note, since some of the comments suggested otherwise.
Please don't ever use glBegin/glEnd.
Don't ever tell someone to use it.
The only time it is OK to use glBegin/glEnd is in your very first OpenGL program.
Arrays are not much harder to handle, but...
... they are faster.
... they will still work with newer OpenGL versions.
... they will work with GLES.
... loading them from files is much easier.
I'm assuming you're learning OpenGL and only need to get this to work somehow. If you need raw speed, there are shaders and vertex buffers and all sorts of neat and complicated things.
The simplest way is to load the PNG into a texture (assuming you have the ability to load images into memory; you do need that), then draw it with a quad, setting the appropriate texture coordinates (they go from 0 to 1 as floating-point coordinates, so you need to divide by the texture width or height accordingly).
Use glBegin(GL_QUADS), glTexCoord2f(), glVertex2f(), glEnd() for the simplest (but not fastest) way to draw this.
For putting zero at the top left, either use glOrtho()/gluOrtho2D() to set up the projection matrix differently from the GL default (look up the docs for those functions; set top to 0 and bottom to 1, or to screen_height if you want integer coords), or just change your drawing loop and do glVertex2f(x/screen_width, 1 - y/screen_height).
There are better and faster ways to do this, but this is probably one of the easiest if you're learning raw OpenGL from scratch.
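For illustration, the glOrtho/glBegin route could look roughly like this (my own sketch; x, y, left/top/right/bottom (the clip rect in pixels), texW/texH, spriteW/spriteH and screen_width/screen_height are placeholder names):
// Top-left origin, one unit per pixel.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, screen_width, screen_height, 0); // left, right, bottom, top

// Draw one frame of the sheet at pixel position (x, y).
glBegin(GL_QUADS);
glTexCoord2f(left  / (float)texW, top    / (float)texH); glVertex2f(x,           y);
glTexCoord2f(right / (float)texW, top    / (float)texH); glVertex2f(x + spriteW, y);
glTexCoord2f(right / (float)texW, bottom / (float)texH); glVertex2f(x + spriteW, y + spriteH);
glTexCoord2f(left  / (float)texW, bottom / (float)texH); glVertex2f(x,           y + spriteH);
glEnd();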
A suggestion, if I may. I use SDL to load my textures, so what I did is:
1. I load the image.
2. I determine how to split the spritesheet into separate sprites.
3. I split it into separate surfaces.
4. I make a texture from each one (I have a sprite class to manage them).
5. I free the surfaces.
This takes more time (obviously) on loading, but pays off later (see the sketch below).
This way it's a lot easier (and faster), as you only have to calculate the index of the texture you want to display, and then display it. Then, you can scale/translate it as you like and call a display list to render it to whatever you want. Or, you could do it in immediate mode, either works :)
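A rough sketch of steps 1-5 (my own illustration in SDL 1.2 + SDL_image style; frameCount, framesPerRow, frameW and frameH are placeholder names, the RGBA masks assume a little-endian machine, and error checks are omitted):
SDL_Surface* sheet = IMG_Load("spritesheet.png");              // 1. load the image
SDL_SetAlpha(sheet, 0, 0);                 // copy alpha verbatim instead of blending

std::vector<GLuint> textures(frameCount);
glGenTextures(frameCount, textures.data());

for (int i = 0; i < frameCount; ++i) {
    SDL_Rect src = { Sint16((i % framesPerRow) * frameW),      // 2. locate one sprite
                     Sint16((i / framesPerRow) * frameH),
                     Uint16(frameW), Uint16(frameH) };

    SDL_Surface* frame = SDL_CreateRGBSurface(SDL_SWSURFACE,   // 3. one surface per sprite
                                              frameW, frameH, 32,
                                              0x000000FF, 0x0000FF00,
                                              0x00FF0000, 0xFF000000);
    SDL_BlitSurface(sheet, &src, frame, NULL);

    glBindTexture(GL_TEXTURE_2D, textures[i]);                 // 4. one GL texture per sprite
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frameW, frameH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, frame->pixels);

    SDL_FreeSurface(frame);                                    // 5. free the per-frame surface
}
SDL_FreeSurface(sheet);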