I have created a texture and filled it with ones:
size_t size = width * height * 4;
float *pixels = new float[size];
for (size_t i = 0; i < size; ++i) {
    pixels[i] = 1.0f;
}
glTextureStorage2D(texture_id, 1, GL_RGBA16F, width, height);
glTextureSubImage2D(texture_id, 0, 0, 0, width, height, GL_RGBA,
                    GL_FLOAT, pixels);
I use linear filtering (GL_LINEAR) and clamp to border.
But when I draw the image:
color = texture(atlas, uv);
the last row looks like it has alpha values of less than 1. If in the shader I set the alpha to 1:
color.a = 1.0f;
it draws it correctly. What could be the reason for this?
The problem comes from the combination of GL_LINEAR and GL_CLAMP_TO_BORDER:
Clamp to border means that every texture coordinate outside of [0, 1]
will return the border color. This color can be set with
glTexParameterfv(..., GL_TEXTURE_BORDER_COLOR, ...) and is transparent
black (0, 0, 0, 0) by default.
The linear filter takes into account texels that are adjacent to the
sampling location (unless sampling happens exactly at texel centers1),
and will thus also read border texels, which here are transparent black
and pull the alpha below 1.
If you don't want this behavior, the simplest solution is to use GL_CLAMP_TO_EDGE instead, which clamps sampling to the edge texels and thus effectively extends the last row/column outward. The different wrapping modes are explained very well at open.gl.
1) Sampling most probably does not happen exactly at texel centers, as explained in this answer.
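For reference, a minimal sketch of both fixes, using the DSA-style texture_id from the question (glTextureParameter* requires OpenGL 4.5; with the classic API the same parameters go through glTexParameteri/glTexParameterfv on the bound texture):
// Option 1: clamp to the edge so the last row/column is extended instead of
// being blended with the (transparent black) border.
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture_id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Option 2: keep GL_CLAMP_TO_BORDER but make the border opaque white.
const float border[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glTextureParameterfv(texture_id, GL_TEXTURE_BORDER_COLOR, border);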
Related
For the gui for my game, I have a custom texture object that stores the rgba data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlayed onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an openGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
    int[] colors = new int[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = r[x][y];
            colors[i+1] = g[x][y];
            colors[i+2] = b[x][y];
            colors[i+3] = a[x][y];
            i += 4;
        }
    }
    return colors;
}
Where r, g, b, and a are jagged int arrays holding values from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square into the upper left of the texture, and bind the texture to the framebuffer shader and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
    outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an opengl texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw them doing a similar thing using a float array, so I tried converting my 0-255 values to a 0-1 float array and passing it as the image data like so:
public float[] toFloatArray(){
    float[] colors = new float[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i] = ((r[x][y] * 1.0f) / 255);
            colors[i+1] = ((g[x][y] * 1.0f) / 255);
            colors[i+2] = ((b[x][y] * 1.0f) / 255);
            colors[i+3] = ((a[x][y] * 1.0f) / 255);
            i += 4;
        }
    }
    return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specified GL_UNSIGNED_INT as the type of the "Host" data, OpenGL expected 32 bits allocated for each color component. Since OpenGL maps normalized integer input to the range [0.0f, 1.0f], it takes your input color values (which lie in the range [0, 255]) and divides all of them by the maximum value of an unsigned 32-bit int (about 4.2 billion) to get the final color displayed on screen. As an exercise, using your original code, set the "clear" color of the screen to white, and see that a black rectangle is getting drawn on screen.
You have two options. The first is to convert the color values to the range implied by GL_UNSIGNED_INT, which means multiplying each color value by 2^24 (e.g. Math.pow(2, 24)) and trusting that the integer overflow from that multiplication behaves correctly (since Java doesn't have unsigned integer types).
The other, far safer option is to store each 0-255 value in a byte[] array (do not use char: char is 1 byte in C/C++/OpenGL, but 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
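A hedged sketch of that byte-based upload, written against the C API (the corresponding LWJGL call takes the same arguments with a ByteBuffer in place of the pointer; width, height and the r/g/b/a arrays are the ones from the question):
#include <vector>
// Pack the 0-255 channel values into unsigned bytes and upload them with
// GL_UNSIGNED_BYTE, so OpenGL normalizes by 255 rather than by ~4.2 billion.
std::vector<unsigned char> pixels(width * height * 4);
size_t i = 0;
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        pixels[i++] = (unsigned char)r[x][y];
        pixels[i++] = (unsigned char)g[x][y];
        pixels[i++] = (unsigned char)b[x][y];
        pixels[i++] = (unsigned char)a[x][y];
    }
}
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());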
I need to do some CPU operations on the framebuffer data previously drawn by OpenGL. Sometimes the resolution at which I need to draw is higher than the texture resolution, so I have thought about picking a SIZE for the viewport and the target FBO, drawing, reading back to a CPU buffer, then moving the viewport somewhere else in the space and repeating. In my CPU memory I will have all the needed color data. Unfortunately, for my purposes, I need to keep an overlap of 1 pixel between the vertical and horizontal borders of my tiles. Therefore, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 to hold the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at will be
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, and likewise ntilesVer and ntilesHor. My code now looks like
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);
for (int i=0; i < ntiles; ++i)
{
    for (int j=0; j < ntiles; ++j)
    {
        tileid = i * ntiles +j;
        int left = max(0, (j*tilesize)- j);
        int right = left + tilesize;
        int bottom = max(0, (i*tilesize)- i);
        int top = bottom + tilesize;
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(left, right, bottom, top, -1, 0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Draw display list
        glCallList(DList);
        // Texture target of the fbo
        glReadBuffer(tex_render_target);
        // Read to CPU to preallocated buffer
        glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
    }
}
The code runs, and the various "colorbuffers" appear to contain color data similar to what I expect from my draw; however, the overlap I need is not there: the last column of tile 0 and the first column of tile 1 yield different values.
Any idea?
int left = max(0, (j*tilesize)- j);
int right = left + tilesize;
int bottom = max(0, (i*tilesize)- i);
int top = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel-based mapping, as suggested by your viewport, with some constant overlap, then the -j and -i terms make no sense, as they are nonuniform. I think you want some constant value there. Also, you don't need that max. You want a 1 pixel overlap though, so your constant will be 0, because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.
I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel of it.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded from the light or not.
My problem comes when converting a depth value read from the cubemap back into a distance. I use the light-to-fragment vector (in world space) to fetch the depth value from the cubemap. At this point, I don't know which of the six faces is being used, nor which 2D texture coordinates match the depth value I'm reading. So how can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffers construction:
for (int i = 0; i < 6; ++i)
{
    glGenFramebuffers(1, &FBO->FrameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
    glDrawBuffer(GL_NONE);
}
The piece of fragment shader I'm trying to write to achieve this:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    ShadowVec = DepthValueToDistance(ShadowVec);
    if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
        return 1.0;
    return 0.0;
}
The DepthValueToDistance function being my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
        return 1.0;
    return 0.0;
}
Explanation of VectorToDepthValue(vec3 Vec):
LocalZcomp corresponds to what would be the Z-component of the given Vec in the matching frustum of the cubemap. It is the largest absolute component of Vec (for instance, if Vec.y is the biggest component, we will look either at the Y+ or the Y- face of the cubemap).
If you look at this Wikipedia article, you will understand the math just after it (I kept it in a formal form for understanding), which simply converts LocalZcomp into a normalized Z value (in [-1, 1]) and then maps it into [0, 1], which is the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frustums used to generate the cubemap.
ComputeShadowFactor then simply compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), adds a small depth bias (which was missing in the question), and returns 1.0 if the fragment is not occluded from the light.
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (Z) which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation (see the sketch after these notes).
V needs to be in world space, since the shadow cube map is normally axis-aligned in world space; thus the max(abs(V.x), abs(V.y), abs(V.z)) part only works if V is a world-space direction vector.
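A minimal host-side C++ sketch of that first note (the uniform names and the shaderProgram handle are made up for this example; n and f are the values used above):
// Precompute the frustum-dependent terms once on the CPU and upload them,
// so the shader does not re-evaluate them per fragment.
const float n = 1.0f;     // near plane of the cube map frustums
const float f = 2048.0f;  // far plane of the cube map frustums
glUseProgram(shaderProgram);  // hypothetical program handle
glUniform1f(glGetUniformLocation(shaderProgram, "u_FPlusN"),  f + n);
glUniform1f(glGetUniformLocation(shaderProgram, "u_FMinusN"), f - n);
glUniform1f(glGetUniformLocation(shaderProgram, "u_FTimesN"), f * n);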
I have a set of X,Y,Z values on a regularly spaced grid from which I need to create a color-filled contour plot using C++. I've been googling this for days and the consensus appears to be that this is achievable using a 1D texture map in OpenGL. However, I have not found a single example of how to actually do this and I'm not getting anywhere just reading the OpenGL documentation. My confusion comes down to one core question:
My data does not contain an X,Y value for every pixel - it's a regularly spaced grid with data every 4 units on the X and Y axis, with a positive integer Z value.
For example: (0, 0, 1), (4, 0, 1), (8, 0, 2), (0, 4, 2), (0, 8, 4), (4, 4, 3), etc.
Since the contours would be based on the Z value and there are gaps between data points, how does applying a 1D texture achieve contouring this data (i.e. how does applying a 1D texture interpolate between grid points?)
The closest I've come to finding an example of this is in the online version of the Redbook (http://fly.cc.fer.hr/~unreal/theredbook/chapter09.html) in the teapot example but I'm assuming that teapot model has data for every pixel and therefore no interpolation between data points is needed.
If anyone can shed light on my question or better yet point to a concrete example of working with a 1D texture map in this way I'd be forever grateful as I've burned 2 days on this project with little to show for it.
EDIT:
The following code is what I'm using and while it does display the points in the correct location there is no interpolation or contouring happening - the points are just displayed as, well, points.
//Create a 1D image - for this example it's just a red line
int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
    stripeImage[3*j] = j < 2 ? 0 : 255;
    stripeImage[3*j+1] = 255;
    stripeImage[3*j+2] = 255;
}
glDisable(GL_TEXTURE_2D);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR );
float s[4] = { 0,1,0,0 };
glTexGenfv( GL_S, GL_OBJECT_PLANE, s );
glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_1D );
glBegin(GL_POINTS);
//_coords contains X,Y,Z data - Z is the value that I'm trying to contour
for (int x = 0; x < _coords.size(); ++x)
{
    glTexCoord1f(static_cast<ValueCoord*>(_coords[x])->GetValue());
    glVertex3f(_coords[x]->GetX(), _coords[x]->GetY(), zIndex);
}
glEnd();
The idea is to use the Z coordinate as the S coordinate into the texture. The linear interpolation over the texture coordinate then creates the contour. Note that with a shader you can also put the XY->Z data into a 2D texture and use the value fetched from the 2D sampler as an index into the color ramp of the 1D texture.
Update: Code example
First we need to change the way you use textures a bit.
Use this to prepare the texture:
//Create a 1D image - for this example it's just a red line
int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
    stripeImage[3*j] = j*255/32; // use a gradient instead of a line
    stripeImage[3*j+1] = 255;
    stripeImage[3*j+2] = 255;
}
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_1D, texID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
// We want the texture to wrap, so that values outside the range [0, 1]
// are mapped into a gradient sawtooth
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
And this to bind it for usage.
// The texture coordinate comes from the data, it is not
// generated from the vertex position!!!
glDisable( GL_TEXTURE_GEN_S );
glDisable(GL_TEXTURE_2D);
glEnable( GL_TEXTURE_1D );
glBindTexture(GL_TEXTURE_1D, texID);
Now to your conceptual problem: you cannot directly make a contour plot from XYZ data. The XYZ triples are just sparse sampling points. You need to fill the gaps, for example by putting them into a 2D histogram first. For this, create a grid with a certain number of bins in each direction, initialized to all NaN (pseudocode):
float hist2D[bins_x][bins_y] = {NaN, NaN, ...}
Then, for each XYZ sample, add the Z value to the matching bin if it is not NaN, otherwise replace the NaN with the Z value. Afterwards use a Laplace filter on the histogram to smooth out the bins that still contain a NaN. Finally you can render the grid as a contour plot using
glBegin(GL_QUADS);
for(int y=0; y<grid_height-1; y++) for(int x=0; x<grid_width-1; x++) {
    glTexCoord1f(hist2D[x  ][y  ]); glVertex2i(x  ,y);
    glTexCoord1f(hist2D[x+1][y  ]); glVertex2i(x+1,y);
    glTexCoord1f(hist2D[x+1][y+1]); glVertex2i(x+1,y+1);
    glTexCoord1f(hist2D[x  ][y+1]); glVertex2i(x  ,y+1);
}
glEnd();
or you could upload the grid as a 2D texture and use a fragment shader to indirect into the color ramp.
Another way to fill the gaps in sparse XYZ data is to find the 2D Voronoi diagram of the XY set and use it to create the sampling geometry. The Z value for each vertex would then be the distance-weighted average of the XYZ samples contributing to the intersecting Voronoi cells.
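To make the binning step above concrete, here is a minimal sketch; the Sample struct, the bin-index mapping and the accumulate-on-collision behavior are assumptions for illustration, and the Laplace smoothing pass is omitted:
#include <cmath>
#include <vector>
struct Sample { float x, y, z; };  // one sparse data point
// Scatter sparse samples into a bins_x-by-bins_y grid; bins that receive no
// sample stay NaN and are filled later (e.g. by the Laplace filter mentioned above).
std::vector<float> binSamples(const std::vector<Sample>& samples,
                              int bins_x, int bins_y,
                              float min_x, float max_x,
                              float min_y, float max_y)
{
    std::vector<float> hist2D(bins_x * bins_y, std::nanf(""));
    for (const Sample& p : samples) {
        int bx = (int)((p.x - min_x) / (max_x - min_x) * (bins_x - 1));
        int by = (int)((p.y - min_y) / (max_y - min_y) * (bins_y - 1));
        float& bin = hist2D[by * bins_x + bx];
        bin = std::isnan(bin) ? p.z : bin + p.z;  // replace NaN, otherwise accumulate
    }
    return hist2D;
}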
I'm loading custom data into a 2D GL_RGBA16F texture:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,Gx,Gy,GL_RGBA,GL_FLOAT,grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0,0)).r == 0.25) {
    FragColor = vec4(0,1,0,1);
} else {
    FragColor = vec4(1,0,0,1);
}
By default texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_TEXTURE_WRAP_[S|T|R] = GL_REPEAT
This means that in cases where the mapping between texels of the texture and pixels on the screen does not fit exactly, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, these are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this will be the case here): here, the hardware does not interpolate between mipmap levels, but between adjacent texels in the texture. The problem now comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels)
tex-coord: 0 0.25 0.5 0.75 1
texels |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 is on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means for you that with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since -1 == 127 (due to repeat) and everything except [0,0] is 0, this results in
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 =
(  1   +   0    +    0     +   0  ) / 4 = 0.25
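A hedged sketch of the usual fixes for the question above: either turn off the filtering/wrapping so the lookup hits exactly one texel, or sample at the texel center instead of its corner (for a 128x128 texture, (0.5/128, 0.5/128) addresses the center of texel [0,0]).
// Set right after glBindTexture(GL_TEXTURE_2D, grammar) in the question's code.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   // no mipmap lookup
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);   // no 2x2 averaging
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // no wrap to texel 127
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// In the shader, texture(grammar, vec2(0.5/128.0, 0.5/128.0)) then reads texel [0,0] exactly.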