I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as the color attachment of six framebuffers, encoding the light-to-fragment distance in each pixel of it.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture as the depth attachment of my framebuffers, instead of a color attachment.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded from the light or not.
My problem comes when converting a depth value from the cubemap back into a distance. I use the light-to-fragment vector (in world space) to fetch my depth value in the cubemap. At that point, I don't know which of the six faces is being used, nor what 2D texture coordinates match the depth value I'm reading. So how can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffer construction:
for (int i = 0; i < 6; ++i)
{
    glGenFramebuffers(1, &FBO->FrameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
    glDrawBuffer(GL_NONE);
}
The piece of fragment shader I'm trying to write to achieve this:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    ShadowVec = DepthValueToDistance(ShadowVec);
    if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
        return 1.0;

    return 0.0;
}
The DepthValueToDistance function being my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
        return 1.0;

    return 0.0;
}
Explanation of VectorToDepthValue(vec3 Vec):
LocalZcomp corresponds to what would be the Z component of the given Vec in the matching frustum of the cubemap. It is actually the largest component of Vec (for instance, if Vec.y is the largest component, we will look either at the Y+ or the Y- face of the cubemap).
If you look at this Wikipedia article, you will understand the math just after (I kept it in a formal form for understanding), which simply converts LocalZcomp into a normalized Z value in [-1, 1] and then maps it into [0, 1], the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frustums used to generate the cubemap.
ComputeShadowFactor then just compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), adds a small depth bias (which was missing in the question), and returns 1 if the fragment is not occluded from the light.
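Side note: a samplerCubeShadow can also do this comparison in hardware if the texture's GL_TEXTURE_COMPARE_MODE is set to GL_COMPARE_REF_TO_TEXTURE. A minimal sketch of that variant, assuming GL_TEXTURE_COMPARE_FUNC is GL_LEQUAL (this setup is my assumption, not part of the code above):

float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    // The fourth component is the reference depth; texture() returns
    // 1.0 where reference <= stored depth, i.e. where the fragment is lit.
    float Ref = VectorToDepthValue(VertToLightWS) - 0.0001; // small bias
    return texture(ShadowCubeMap, vec4(VertToLightWS, Ref));
}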
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
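For reference, the standard OpenGL perspective projection matrix shown in those images is (n/f are the near/far planes, l/r/b/t the left/right/bottom/top frustum extents):

| 2n/(r-l)     0      (r+l)/(r-l)      0      |
|    0      2n/(t-b)  (t+b)/(t-b)      0      |
|    0         0     -(f+n)/(f-n) -2fn/(f-n)  |
|    0         0         -1           0       |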
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (Z) which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation (see the sketch after these notes).
V needs to be in world space, since the shadow cube map is normally axis-aligned in world space; thus the "max(abs(V.x), abs(V.y), abs(V.z))" part only works if V is a world-space direction vector.
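A sketch of what that precomputation could look like (the uniform names are made up for illustration):

// Hypothetical uniforms holding the precomputed frustum terms.
uniform float FpN; // f + n
uniform float FmN; // f - n
uniform float FxN; // f * n

float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    float NormZComp = FpN / FmN - (2.0 * FxN) / FmN / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}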
Related
I've created a terrain in OpenGL using a heightmap image. The result is a dynamically allocated array that includes all the vertices of the terrain (x, y, z) with a stride of 3, like this: (firstVertexX, firstVertexY, firstVertexZ, secondVertexX, secondVertexY, secondVertexZ, ...).
I also calculated the normals (in a similar array) and an extremely bad approximation of the supposed UVs of the terrain (using the normals) in order to render it. The rendering has no problem at all, except that, as I said, the UV approximation is really bad and the rendered texture has a "polygon" feel to it (using GL_TRIANGLE_STRIP).
I want to clarify that I want the terrain to be created in OpenGL and not in any other program like Blender. So I have to somehow specify a UV coordinate for each vertex in the array mentioned above that contains the vertices. I would also like to say that I'm not looking for a perfect UV solution (even if somehow there is one), but for a way to do a rough approximation so the result is decent.
So my questions are:
Do the UV coordinates need to range from 0 to 1, with no consideration of the terrain's width and length, or not?
Does the UV coordinate array need to be the same length as the vertices array?
Is there any simple algorithm or way that I could specify the UVs of the terrain?
The texture that is rendered:
The actual image from the web:
The code that specifies the creation of the arrays :
for (i = 0; i < terrainGridLength - 1; i++) {    // z
    for (j = 0; j < terrainGridWidth; j++) {     // x
        TerrainVertices[k]     = startW + j + xOffset;
        TerrainVertices[k + 1] = 20 * terrainHeights[(i + 1) * terrainGridWidth + j] + yOffset;
        TerrainVertices[k + 2] = startL - (i + 1) + zOffset;
        k = k + 3;

        TerrainVertices[k]     = startW + j + xOffset;
        TerrainVertices[k + 1] = 20 * terrainHeights[i * terrainGridWidth + j] + yOffset;
        TerrainVertices[k + 2] = startL - i + zOffset;
        k = k + 3;

        float x = RandomFloat(0.16, 0.96);
        TerrainUVs[d]     = x * terrainNormals[3 * ((i + 1) * terrainGridWidth + j)];
        TerrainUVs[d + 1] = terrainNormals[3 * ((i + 1) * terrainGridWidth + j) + 2];
        x = RandomFloat(0.21, 0.46);
        TerrainUVs[d + 2] = x * terrainNormals[3 * ((i + 1) * terrainGridWidth + j) + 1];
        d = d + 3;

        x = RandomFloat(0.3, 0.49);
        TerrainUVs[d]     = x * terrainNormals[3 * (i * terrainGridWidth + j)];
        TerrainUVs[d + 1] = terrainNormals[3 * (i * terrainGridWidth + j) + 2];
        x = RandomFloat(0.4, 0.85);
        TerrainUVs[d + 2] = x * terrainNormals[3 * (i * terrainGridWidth + j) + 1];
        d = d + 3;
    }
}
I will try to address all of your questions as best I can.
1) Do the UV coordinates need to range from 0 to 1, with no consideration of the terrain's width and length or not?
UV coordinates are generally in 0-to-1 space, but if you pass in a UV outside that range, how it is handled depends on how you have set up OpenGL to handle that case.
You can tell the texture sampler to treat the coordinates as wrapping with the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Alternatively you can clamp the UV values with the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
See more on texture samplers here:
https://www.khronos.org/opengl/wiki/Sampler_Object
Does the UV coordinate array need to be the same length as the vertices array?
Generally speaking, yes. When texturing some geometry you will usually want at least one UV for each vertex, so you should have the same number of UVs as you do vertices.
BUT you may have more if you have more than one texture applied to the geometry, or you may have fewer if you are texturing your geometry via some other means, like with a fragment shader you write that does not require UVs.
You can also have fewer if your UVs are indexed, which is when vertices that share the same UV each refer to the same UV in an array through an index. But if you want to keep things simple, just go for 1 to 1 and don't worry about the duplicates.
Is there any simple algorithm or way that I could specify the UVs of the terrain?
One easy way to do this is just to texture each triangle as part of a quad. Your terrain should just be a grid of quads, which means you could iterate through your list of vertices, assigning each quad in your mesh texture coordinates like these: (0, 0), (1, 0), (1, 1), (0, 1).
Even better, if the grid of your terrain has size 1 on x and z, you can just plug in the world-space x and z coordinates of your mesh and set your texture sampler to wrap, assuming your mesh isn't rotated. It will still work if your mesh is rotated, but the texture won't rotate with it.
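A minimal sketch of that idea against the arrays from the question (my interpretation; 'tileScale', 'vertexCount' and the two-floats-per-vertex UV layout are assumptions, not the question's exact layout):

// For each vertex, derive U and V from the world-space X and Z.
// With GL_REPEAT set on the sampler, values outside [0, 1] simply tile.
float tileScale = 0.25f;                 // assumed: texture repeats every 4 units
int uv = 0;
for (int v = 0; v < vertexCount; ++v) {  // vertexCount: assumed total vertex count
    float wx = TerrainVertices[3 * v];      // world-space X
    float wz = TerrainVertices[3 * v + 2];  // world-space Z
    TerrainUVs[uv++] = wx * tileScale;      // U
    TerrainUVs[uv++] = wz * tileScale;      // V
}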
I want to write a signed distance representation. For that I am creating a 100x100x100 voxel grid, for example (the size will increase once it is working).
Now my plans are to load a point cloud into a 1d texture:
glEnable(GL_TEXTURE_1D);
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_1D, _texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, pc->pc.size(), 0, GL_RGBA, GL_FLOAT, &pc->pc.front());
glBindTexture(GL_TEXTURE_1D, 0);
'pc' is just a class which holds a vector of the struct Point, which has only the floats x, y, z, w.
Then I want to render the whole 100x100x100 grid, i.e. each voxel, iterate through all points of that texture, calculate the distance to the current voxel, and store that distance in a new texture (1000x1000). For the moment the texture I am creating holds only color values, storing the distance in the red and green components, with blue set to 1.0.
So I can see the result on screen.
My problem now is that when I have about 500,000 points in my point cloud, it seems to stop rendering after a few voxels (fewer than 50,000). My guess is that if it takes too long, it stops and just throws out the buffer that it has.
I don't know if that can be the case, but if it is, is there something I can do against it, or maybe something I can do to make this procedure better/faster?
My second guess is that there is something I haven't considered with the 1D texture. But is there a better way to pass in a large amount of data? Because I will surely need a few hundred thousand points of data.
I don't know if it helps if I show the full fragment shader, so I will only show some parts, which I think is important for that problem:
Distance calculation and iteration through all points:
for (int i = 0; i < points; ++i) {
    vec4 texInfo = texture(textureImg, getTextCoord(i));
    vec4 pos = position;
    pos.z /= rows * rows;

    vec4 distVector = texInfo - pos;
    float dist = sqrt(distVector.x * distVector.x
                    + distVector.y * distVector.y
                    + distVector.z * distVector.z);

    if (dist < minDist) {
        minDist = dist;
    }
}
The getTextCoord function:
float getTextCoord(float a)
{
    return (a * 2.0f + 1.0f) / (2.0f * points);
}
Edit:
vec4 newPos = vec4(makeCoord(position.x + Col()) - 1,
                   makeCoord(position.y + Row()) - 1,
                   0,
                   1.0);

float makeCoord(float a) {
    return (a / rows) * 2;
}

int Col() {
    float a = mod(position.z, rows);
    return int(a);
}

int Row() {
    float a = position.z / rows;
    return int(a);
}
You absolutely shouldn't be looping through all of your points in a fragment shader, as the loop runs once per pixel per frame; with N pixels and M points that is effectively O(N*M) computational complexity.
All textures have limits on how much data they can hold per dimension. The two most important values here are GL_MAX_TEXTURE_SIZE and GL_MAX_3D_TEXTURE_SIZE. As stated in the official docs:
Texture sizes have a limit based on the GL implementation. For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE. For array textures, the maximum array length is GL_MAX_ARRAY_TEXTURE_LAYERS. For 3D textures, no dimension can be greater than GL_MAX_3D_TEXTURE_SIZE in size.
Within these limits, the size of a texture can be any value. It is advised however, that you stick to powers-of-two for texture sizes, unless you have a significant need to use arbitrary sizes.
The most typical values are listed here and here.
If you really have to use large data amounts inside your frag shader, consider a 2D or 3D texture with known power-of-2 dimensions and GL_NEAREST / GL_REPEAT coordinates. This will enable you to compute 2D texture coords just by multiplying the source offset by a precomputed 1/width value (Y coord; the remainder is by definition smaller than 1 texel and can be safely ignored in the presence of GL_NEAREST) and using it as-is for X coord (GL_REPEAT guarantees that only the remainder gets used). Personally I implemented this approach when I needed to pass 128 MB of data to a GLSL 1.20 shader.
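A sketch of that lookup, assuming a W x H texture with GL_NEAREST filtering and GL_REPEAT wrapping on S (this is my reconstruction of the idea, not the original code; 'fetchByIndex' and the sizes are made up):

const float W = 4096.0; // assumed power-of-two texture width
const float H = 4096.0; // assumed texture height

vec4 fetchByIndex(sampler2D tex, float i)
{
    // X: GL_REPEAT keeps only the fractional part, i.e. (i mod W) / W;
    //    the +0.5 lands the coordinate on the texel center.
    float u = (i + 0.5) / W;
    // Y: row plus a sub-texel fraction; GL_NEAREST ignores the fraction.
    float v = (i + 0.5) / (W * H);
    return texture(tex, vec2(u, v));
}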
If you are targeting a recent enough OpenGL (3.1 or later), you can also use buffer textures; a sketch follows.
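A minimal buffer-texture sketch (assuming the Point struct from the question, i.e. four floats per point; variable names are illustrative):

// C++ side: back a texture with a plain buffer object.
GLuint tbo, tboTex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, pc->pc.size() * sizeof(Point),
             &pc->pc.front(), GL_STATIC_DRAW);

glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // one RGBA32F texel per point

// GLSL side (fragment shader):
//   uniform samplerBuffer pointsBuf;
//   vec4 p = texelFetch(pointsBuf, i);  // 'i' is a plain int index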
And last but not least: you cannot pass integer-precision values greater than 2^24 through standard IEEE single-precision floats.
I'm attempting to create omnidirectional/point lighting in OpenGL 3.3. I've searched around on the internet and this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to:
Generate a framebuffer using depth component
Generate a cubemap and bind it to said framebuffer
Draw to the individual parts of the cubemap as referenced by the GL_TEXTURE_CUBE_MAP_* enums
Draw the scene normally, and compare the depth value of the fragments against those in the cubemap
Now, I've read that it is better to store the distance from the light to the fragment, rather than the fragment depth, as it allows for easier cubemap lookup (something about not needing to check each individual texture?).
My current issue is that the light that comes out is actually in a sphere, and does not generate shadows. Another issue is that the framebuffer complains of not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.
Here is my framebuffer and cube map initialization:
framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

for (int i = 0; i < 6; i++) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT16,
                 800, 800, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glDrawBuffer(GL_NONE);
Shadow Vertex Shader
void main(){
    gl_Position = depthMVP * M * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment Shader
void main(){
    fragmentDepth = distance(lightPos, pos);
}
Vertex Shader (unrelated bits cut out)
uniform mat4 depthMVP;
void main() {
    PositionWorldSpace = (M * vec4(position, 1.0)).xyz;
    gl_Position = MVP * vec4(position, 1.0);
    ShadowCoord = depthMVP * M * vec4(position, 1.0);
}
Fragment Shader (unrelated code cut)
uniform samplerCube shadowMap;

void main(){
    float bias = 0.005;
    float visibility = 1;
    if (texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace) - bias)
        visibility = 0.1;
}
Now, as you are probably thinking: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction.
Well, it is defined like so:
glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();
The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix, however I am unsure if that is what it is supposed to be.
Then the draw code is done for the sides of the cubemap like so:
for (int loop = 0; loop < 6; loop++) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + loop, shadowTexture, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    for (auto i : models) {
        glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
        glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();
        glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"), 1,
                           GL_FALSE, glm::value_ptr(depthMVP));
        glBindVertexArray(i->vao);
        glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT, 0);
    }
}
Finally the scene gets drawn normally (I'll spare you the details). Before the calls to draw onto the cubemap I set the framebuffer to the one that I generated earlier, and change the viewport to 800 by 800. I change the framebuffer back to 0 and reset the viewport to 800 by 600 before I do normal drawing. Any help on this subject will be greatly appreciated.
Update 1
After some tweaking and bug fixing, this is the result I get. I fixed an error with the depthMVP not working; what I am drawing here is the distance that is stored in the cubemap.
http://imgur.com/JekOMvf
Basically what happens is that it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side; however, I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in the cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.
Update 2
I've gone ahead and created these matrices, however I am unsure how valid they are (they were taken from a website about DX cubemaps, so I inverted the Z coordinate).
case 1: // Negative X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1, 0, 0), glm::vec3(0, -1, 0));
    break;
case 3: // Negative Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0, -1, 0), glm::vec3(0, 0, -1));
    break;
case 5: // Negative Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0, 0, -1), glm::vec3(0, -1, 0));
    break;
case 0: // Positive X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1, 0, 0), glm::vec3(0, -1, 0));
    break;
case 2: // Positive Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0, 1, 0), glm::vec3(0, 0, 1));
    break;
case 4: // Positive Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0, 0, 1), glm::vec3(0, -1, 0));
    break;
The question still stands: what am I supposed to translate the depthMVP view portion by, given that these are 6 individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows): http://i.imgur.com/HsOSG5v.png
As you can see the shadows seem fine, however the positioning is obviously an issue. The view matrix that I used to generate this was just an inverse translation of the position of the camera (as the lookAt() function would do).
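For what it's worth, a common approach (an assumption on my part, not a confirmed fix) is to translate world space so the light sits at the origin, then apply each face's lookAt() and the 90-degree projection:

glm::mat4 lightTranslate = glm::translate(glm::mat4(1.0f), -lightPos);
for (int face = 0; face < 6; ++face) {
    // View-projection for this cubemap face.
    glm::mat4 depthVP = depthProjectionMatrix * sideViews[face] * lightTranslate;
    // Then, per model: depthMVP = depthVP * i->getModelMatrix();
}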
Update 3
Code, as it currently stands:
Shadow Vertex
void main(){
    gl_Position = depthMVP * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment
void main(){
    fragmentDepth = distance(lightPos, pos);
}
Main Vertex
void main(){
    PositionWorldSpace = (M * vec4(position, 1)).xyz;
    ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}
Main Frag
void main(){
    float texDist = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float dist = distance(lightPos, PositionWorldSpace);
    if (texDist < dist)
        visibility = 0.1;

    outColor = vec3(texDist); // This is to visualize the depth maps
}
The perspective matrix being used
glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);
Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner. It seems to be normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that does not have a projection, but this is due to the near plane of the frustum, and I think it will be easy to fix (like offsetting the cameras back 0.5 toward the center).
If you leave the fragment depth for OpenGL to determine, you can take advantage of hardware hierarchical Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension), it prevents a hardware optimization called hierarchical Z. Hi-Z, for short, is a technique where rasterization of some primitives can be skipped on the basis that the depth values for the entire primitive lie behind the values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.
If instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.
Then, in your fragment shader where you sample your depth cube map, you would convert the distance values into depth values by using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap
So applying that extra bit of code to your shader, it would work out to something like this:
void main () {
    float shadowDepth = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float testDepth = VectorToDepthValue(lightPos - PositionWorldSpace);
    if (shadowDepth < testDepth)
        visibility = 0.1;
}
I have a set of X,Y,Z values on a regularly spaced grid from which I need to create a color-filled contour plot using C++. I've been googling this for days, and the consensus appears to be that this is achievable using a 1D texture map in OpenGL. However, I have not found a single example of how to actually do this, and I'm not getting anywhere just reading the OpenGL documentation. My confusion comes down to one core question:
My data does not contain an X,Y value for every pixel - it's a regularly spaced grid with data every 4 units on the X and Y axis, with a positive integer Z value.
For example: (0, 0, 1), (4, 0, 1), (8, 0, 2), (0, 4, 2), (0, 8, 4), (4, 4, 3), etc.
Since the contours would be based on the Z value and there are gaps between data points, how does applying a 1D texture achieve contouring this data (i.e. how does applying a 1D texture interpolate between grid points?)
The closest I've come to finding an example of this is in the online version of the Redbook (http://fly.cc.fer.hr/~unreal/theredbook/chapter09.html) in the teapot example but I'm assuming that teapot model has data for every pixel and therefore no interpolation between data points is needed.
If anyone can shed light on my question or better yet point to a concrete example of working with a 1D texture map in this way I'd be forever grateful as I've burned 2 days on this project with little to show for it.
EDIT:
The following code is what I'm using and while it does display the points in the correct location there is no interpolation or contouring happening - the points are just displayed as, well, points.
//Create a 1D image - for this example it's just a red line
int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
    stripeImage[3*j]   = j < 2 ? 0 : 255;
    stripeImage[3*j+1] = 255;
    stripeImage[3*j+2] = 255;
}
glDisable(GL_TEXTURE_2D);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR );
float s[4] = { 0,1,0,0 };
glTexGenfv( GL_S, GL_OBJECT_PLANE, s );
glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_1D );
glBegin(GL_POINTS);
//_coords contains X,Y,Z data - Z is the value that I'm trying to contour
for (int x = 0; x < _coords.size(); ++x)
{
    glTexCoord1f(static_cast<ValueCoord*>(_coords[x])->GetValue());
    glVertex3f(_coords[x]->GetX(), _coords[x]->GetY(), zIndex);
}
glEnd();
The idea is to use the Z coordinate as the S coordinate into the texture. The linear interpolation over the texture coordinate then creates the contour. Note that by using a shader you can put the XY->Z data into a 2D texture and use the shader to do an indirection from the value of the 2D sampler into the color ramp of the 1D texture.
Update: Code example
First we need to change the way you use textures a bit.
Do this to prepare the texture:
//Create a 1D image - a color gradient this time
int stripeImageWidth = 32;
GLubyte stripeImage[3*stripeImageWidth];
for (int j = 0; j < stripeImageWidth; j++) {
    stripeImage[3*j]   = j*255/32; // use a gradient instead of a line
    stripeImage[3*j+1] = 255;
    stripeImage[3*j+2] = 255;
}
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_1D, texID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage1D(GL_TEXTURE_1D, 0, 3, stripeImageWidth, 0, GL_RGB, GL_UNSIGNED_BYTE, stripeImage);
// We want the texture to wrap, so that values outside the range [0, 1]
// are mapped into a gradient sawtooth
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
And this to bind it for usage.
// The texture coordinate comes from the data, it is not
// generated from the vertex position!!!
glDisable( GL_TEXTURE_GEN_S );
glDisable(GL_TEXTURE_2D);
glEnable( GL_TEXTURE_1D );
glBindTexture(GL_TEXTURE_1D, texID);
Now to your conceptual problem: you cannot directly make a contour plot from XYZ data. XYZ are just sparse sampling points. You need to fill the gaps, for example by putting the data into a 2D histogram first. For this, create a grid with a certain number of bins in each direction, initialized to all NaN (pseudocode):
float hist2D[bins_x][bins_y] = {NaN, NaN, ...}
Then, for each XYZ sample, add the Z value to the matching bin of the grid if the bin is not NaN; otherwise replace the NaN with the Z value. Afterwards use a Laplace filter on the histogram to smooth out the bins still containing a NaN. Finally you can render the grid as a contour plot using:
glBegin(GL_QUADS);
for (int y = 0; y < grid_height - 1; y++) for (int x = 0; x < grid_width - 1; x++) {
    glTexCoord1f(hist2D[x  ][y  ]); glVertex2i(x  , y  );
    glTexCoord1f(hist2D[x+1][y  ]); glVertex2i(x+1, y  );
    glTexCoord1f(hist2D[x+1][y+1]); glVertex2i(x+1, y+1);
    glTexCoord1f(hist2D[x  ][y+1]); glVertex2i(x  , y+1);
}
glEnd();
or you could upload the grid as a 2D texture and use a fragment shader to indirect into the color ramp.
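A sketch of that shader-based indirection (all names here are mine, for illustration):

uniform sampler2D heightTex; // the gridded Z values
uniform sampler1D rampTex;   // the 1D color ramp
in vec2 uv;
out vec4 fragColor;

void main()
{
    float z = texture(heightTex, uv).r; // fetch the gridded value
    fragColor = texture(rampTex, z);    // map it through the color ramp
}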
Another way to fill the gaps in sparse XYZ data is to compute the 2D Voronoi diagram of the XY set and use it to create the sampling geometry. The Z value for each vertex would then be the distance-weighted average of the XYZ samples whose Voronoi cells intersect it.
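And to make the earlier histogram-filling step concrete, a minimal C++ sketch (bin counts, extents and the accumulate-then-average rule are assumptions for illustration):

#include <cmath>
#include <vector>

const int binsX = 64, binsY = 64;              // assumed grid size
std::vector<float> hist2D(binsX * binsY, NAN); // Z per bin, NaN = empty
std::vector<int>   counts(binsX * binsY, 0);   // samples per bin

void addSample(float x, float y, float z,
               float minX, float maxX, float minY, float maxY)
{
    int bx = (int)((x - minX) / (maxX - minX) * (binsX - 1));
    int by = (int)((y - minY) / (maxY - minY) * (binsY - 1));
    int idx = by * binsX + bx;
    if (std::isnan(hist2D[idx])) hist2D[idx] = z;  // first sample replaces NaN
    else                         hist2D[idx] += z; // later samples accumulate
    counts[idx]++;                                 // divide by this afterwards
}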
I'm loading custom data into a 2D GL_RGBA16F texture:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,Gx,Gy,GL_RGBA,GL_FLOAT,grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0,0)).r == 0.25) {
    FragColor = vec4(0,1,0,1);
} else {
    FragColor = vec4(1,0,0,1);
}
By default texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_TEXTURE_WRAP_[R|S|T] = GL_REPEAT
This means that in cases where the mapping between texels of the texture and pixels on the screen does not fit, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, these are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this is the case here): here, the hardware does not interpolate between mipmap levels, but between adjacent texels in the texture. The problem now comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but the lower-left corner of it.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels)
tex-coord: 0 0.25 0.5 0.75 1
texels |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 is on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means for you that, with wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since -1 == 127 (due to repeat) and everything except [0,0] is 0, this results in:
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 =
(  1   +    0   +     0   +    0  ) / 4 = 0.25
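Two possible fixes follow from this: sample at the texel center (or use texelFetch), or disable the interpolation and the wrapping. A sketch of both:

// C++ side: no interpolation, no wrap-around.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// GLSL side: either hit the texel center explicitly (128 = texture size here)...
//   texture(grammar, vec2(0.5 / 128.0, 0.5 / 128.0))
// ...or fetch the texel by integer index, bypassing filtering entirely:
//   texelFetch(grammar, ivec2(0, 0), 0)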