Create UV coordinates for terrain loaded from a heightmap? - c++

I've created a terrain in OpenGL using a heightmap image. The result is a dynamically allocated array that contains all the vertices of the terrain (x, y, z) with a stride of 3, like this: (firstVertexX, firstVertexY, firstVertexZ, secondVertexX, secondVertexY, secondVertexZ, ...).
I also calculated the normals (in a similar array) and an extremely bad approximation of the terrain's UVs (using the... normals) in order to render it. The rendering itself works fine, except that, as I said, the UV approximation is really bad and the rendered texture has a "polygon" feel to it (I'm drawing with GL_TRIANGLE_STRIP).
I want to clarify that I want the terrain to be created in OpenGL and not in any other program like Blender. So I have to somehow specify a UV coordinate for each vertex in the vertex array mentioned above. I'm not looking for a perfect UV solution (even if one somehow exists), but for a rough approximation that produces a decent result.
So my questions are:
Do the UV coordinates need to range from 0 to 1, regardless of the terrain's width and length?
Does the UV coordinate array need to be the same length as the vertices array?
Is there any simple algorithm or way to specify the UVs of the terrain?
The texture that is rendered:
The actual image from the web:
The code that creates the arrays:
for (i = 0; i < terrainGridLength - 1; i++) {     // z
    for (j = 0; j < terrainGridWidth; j++) {      // x
        // Vertex from row i+1
        TerrainVertices[k]     = startW + j + xOffset;
        TerrainVertices[k + 1] = 20 * terrainHeights[(i + 1) * terrainGridWidth + j] + yOffset;
        TerrainVertices[k + 2] = startL - (i + 1) + zOffset;
        k = k + 3;
        // Vertex from row i
        TerrainVertices[k]     = startW + j + xOffset;
        TerrainVertices[k + 1] = 20 * terrainHeights[i * terrainGridWidth + j] + yOffset;
        TerrainVertices[k + 2] = startL - i + zOffset;
        k = k + 3;
        // Rough UV approximation derived from the normals, randomly scaled
        float x = RandomFloat(0.16, 0.96);
        TerrainUVs[d]     = x * terrainNormals[3 * ((i + 1) * terrainGridWidth + j)];
        TerrainUVs[d + 1] =     terrainNormals[3 * ((i + 1) * terrainGridWidth + j) + 2];
        x = RandomFloat(0.21, 0.46);
        TerrainUVs[d + 2] = x * terrainNormals[3 * ((i + 1) * terrainGridWidth + j) + 1];
        d = d + 3;
        x = RandomFloat(0.3, 0.49);
        TerrainUVs[d]     = x * terrainNormals[3 * (i * terrainGridWidth + j)];
        TerrainUVs[d + 1] =     terrainNormals[3 * (i * terrainGridWidth + j) + 2];
        x = RandomFloat(0.4, 0.85);
        TerrainUVs[d + 2] = x * terrainNormals[3 * (i * terrainGridWidth + j) + 1];
        d = d + 3;
    }
}

I will try to address all of your questions as best I can.
1) Do the UV coordinates need to range from 0 to 1, regardless of the terrain's width and length?
UV coordinates are generally in the 0-to-1 range, but if you pass in a UV outside that range, how it is handled depends on how you have configured OpenGL for that case.
You can tell the texture sampler to treat the coordinates as wrapping with the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Alternatively you can clamp the UV values with the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
See more on texture samplers here:
https://www.khronos.org/opengl/wiki/Sampler_Object
2) Does the UV coordinate array need to be the same length as the vertices array?
Generally speaking, yes. When texturing geometry you usually want at least one UV per vertex, so you should have the same number of UVs as vertices.
BUT you may have more if more than one texture is applied to the geometry, or fewer if you are texturing the geometry by some other means, such as a fragment shader you write that does not require UVs.
You can also have fewer if your UVs are indexed, i.e. vertices that share a UV all refer to the same entry in a UV array through an index. But if you want to keep things simple, just go for 1 to 1 and don't worry about the duplicates.
3) Is there any simple algorithm or way to specify the UVs of the terrain?
One easy way to do this is to texture each triangle as part of a quad. Your terrain should just be a grid of quads, which means you can iterate through your list of vertices, assigning each quad in your mesh texture coordinates that run from (0, 0) to (1, 1) across the quad.
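For illustration, here is a rough sketch that follows the strip layout of the loop in your question (each column step emits the row i+1 vertex, then the row i vertex), but with two components per UV instead of the three your TerrainUVs array uses. With GL_REPEAT set, the texture then tiles once per grid cell:
int d = 0;
for (int i = 0; i < terrainGridLength - 1; i++) {   // z
    for (int j = 0; j < terrainGridWidth; j++) {    // x
        TerrainUVs[d]     = (float)j;               // u for the row-(i+1) vertex
        TerrainUVs[d + 1] = (float)(i + 1);         // v for the row-(i+1) vertex
        d += 2;
        TerrainUVs[d]     = (float)j;               // u for the row-i vertex
        TerrainUVs[d + 1] = (float)i;               // v for the row-i vertex
        d += 2;
    }
}
These integer coordinates make each quad sample the whole texture exactly once under GL_REPEAT; scale them (e.g. divide by terrainGridWidth) to stretch one copy of the texture across the whole terrain instead.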
Even better, if your terrain grid has a cell size of 1 on x and z, you can just plug in the world-space x and z coordinates of your mesh as the UVs and set your texture sampler to wrap, assuming your mesh isn't rotated. It will still work if your mesh is rotated, but the texture won't rotate with it.
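Inside the loop from your question that would amount to something like this (a sketch, placed right after each vertex is written and before k is advanced):
TerrainUVs[d]     = TerrainVertices[k];       // u = world-space x
TerrainUVs[d + 1] = TerrainVertices[k + 2];   // v = world-space z
d += 2;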

Related

OpenGL heightmap renderer not producing smooth terrain

I'm writing a terrain renderer with C++ and OpenGL using nested grids and heightmaps, and am having trouble with the higher-detail (closer) grids looking blocky/terraced.
Initially I thought the problem was with the 8-bit heightmaps I was using, but 16-bit ones produce the same result (I'm using l3dt, World Machine and Photoshop to generate different maps).
My code needs to be abstracted from an engine pipeline so the heightmap is applied to the grids using transform feedback in a vertex shader:
void main()
{
    // Size of a texel in [0, 1] coordinates and size of a quad in world space
    float texOffset = 1.0 / mapWidthTexels, mapOffset = scale / mapWidthWorld;
    // Texture coordinate to sample the heightmap at. vertPos is the input vertex,
    // scale is pow(2, i) where i is the nested grid number, offset is the eye position
    vec2 texCoord = (vertPos.xz * scale + offset) / mapWidthWorld + 0.5;
    position = vertPos * scale;
    // The y coordinate of the input vertex is used as a flag marking vertices
    // that border between nested grids
    if(vertPos.y == 0.0)
        position.y = texture(heightmap, texCoord).r; // Not a border vertex: just sample the heightmap
    else
    {
        // Otherwise get the two adjacent heights and average them
        vec2 side = vec2(0.0);
        if(abs(vertPos.x) < abs(vertPos.z))
            side.x = mapOffset;
        else
            side.y = mapOffset;
        float a = texture(heightmap, texCoord + side).r, b = texture(heightmap, texCoord - side).r;
        position.y = (a + b) * 0.5;
    }
    float mapF = mapWidthWorld * 0.5;
    // Vertices outside of the heightmap are clamped, creating degenerate triangles
    position.xz = clamp(position.xz + offset, -mapF, mapF) - offset;
    // The y component so far is in the [0, 1] range; multiply it to get the desired height
    position.y *= heightMultiplier;
    // Calculate the normal from differences of neighbouring heightmap samples
    float leftHeight = texture(heightmap, texCoord + vec2(-texOffset, 0.0)).r * heightMultiplier,
          rightHeight = texture(heightmap, texCoord + vec2(texOffset, 0.0)).r * heightMultiplier;
    float downHeight = texture(heightmap, texCoord + vec2(0.0, -texOffset)).r * heightMultiplier,
          upHeight = texture(heightmap, texCoord + vec2(0.0, texOffset)).r * heightMultiplier;
    normal = normalize(vec3(leftHeight - rightHeight, 2.0, upHeight - downHeight));
    tex = vertTex; // Pass through texture coordinates
}
RAW 16-bit heightmaps are loaded as follows:
std::ifstream file(_path, std::ios::ate | std::ios::binary);
int size = file.tellg();
file.seekg(0, std::ios::beg);
m_heightmapWidth = sqrt(size / 2); // Assume square 16-bit greyscale
unsigned short *data = new unsigned short[size / 2];
file.read(reinterpret_cast<char*>(data), size);
if (m_flip16bit) // Dirty endianness fix: swap the bytes of each 16-bit sample
{
    for (int i = 0; i < size / 2; i++)
        data[i] = (data[i] << 8) | ((data[i] >> 8) & 0xFF);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, m_heightmapWidth, m_heightmapWidth, 0, GL_RED, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
delete[] data;
Other formats are loaded similarly with stb_image.
The resulting terrain looks like this:
https://imgur.com/a/d8tDPGO
As you can see, areas with little to no slope have this terraced appearance. What am I doing wrong?
RAW 16-bit heightmaps are loaded as follows:
[...]
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, m_heightmapWidth, m_heightmapWidth, 0, GL_RED, GL_UNSIGNED_SHORT, data);
                               ^^^^^^
Nope. The internalFormat parameter controls the format in which the texture is stored on the GPU, and GL_RED is just 8 bits per channel in any realistic scenario. You most likely want GL_R16, a normalized 16-bit unsigned integer format.
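A minimal sketch of the corrected upload, keeping the variable names from the question; only the internalFormat argument changes:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, // GL_R16 keeps the full 16-bit precision
             m_heightmapWidth, m_heightmapWidth, 0, GL_RED, GL_UNSIGNED_SHORT, data);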
Turns out l3dt's textures were the problem: parts that were meant to be underwater came out terraced. Also, if the height range used in l3dt doesn't match heightMultiplier in the shader, artefacts can arise from that.

Why is this OpenGL-es texture bound to hills in cocos2d 2.0 torn?

This question is related to Repeating OpenGL-es texture bound to hills in cocos2d 2.0
After reading the answers posted in the above post, I've used the following code for computing the vertices and texture coordinates:
CGPoint pt0, pt1;
float ymid = (p0.y + p1.y) / 2;
float ampl = (p0.y - p1.y) / 2;
pt0 = p0;
float U_Off = floor(pt0.x / 512);
for (int j = 1; j < _segments + 1; j++)
{
    pt1.x = p0.x + j * _dx;
    pt1.y = ymid + ampl * cosf(_da * j);
    float xTex0 = pt0.x / 512 - U_Off;
    _vertices[vertices++] = CGPointMake(pt0.x, 0);
    _vertices[vertices++] = CGPointMake(pt0.x, pt0.y);
    _texCoords[texCoords++] = CGPointMake(xTex0, 1.0f);
    _texCoords[texCoords++] = CGPointMake(xTex0, 0);
    pt0 = pt1;
}
p0 = p1;
But unfortunately, I still get a tear / misalignment in my texture (circled in yellow):
I've attached dumps of the vertices and texcoords arrays.
I'm new to OpenGL and can't figure out where the miscalculation is. How do I prevent the line (circled in yellow in the image) from appearing?
EDIT: My texture is either 1024x512 or 512x512 depending on the device. I use the following texture parameters:
ccTexParams tp2 = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_CLAMP_TO_EDGE};
Most likely the reason is non-continuous texture coordinates.
In the texcoords dump you have the following coordinates:
(CGPoint) 0x34b0b28 = (x=1.00390625, y=0)
(CGPoint) 0x34b0b30 = (x=0.005859375, y=1)
It means that between these two points the texture is mapped from 1 to 0 (in the reverse direction). You should continue the texcoords past 1.00390625 => 1.005859375 => ... Also, your texture must have a power-of-two size and must be set up with REPEAT mode.
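For illustration, a sketch of one way to keep u continuous (hillStartX is a hypothetical name): compute U_Off once for the whole hill instead of once per strip, so u keeps increasing past 1.0 and GL_REPEAT wraps it for you.
// Hypothetical sketch: hoist U_Off out of the per-strip code so u never
// jumps backwards at a strip boundary.
float U_Off = floor(hillStartX / 512);   // once, at the first point of the hill
// ... then, for every strip and segment, as before:
float xTex0 = pt0.x / 512 - U_Off;       // now monotonically increasing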
If your texture is in an atlas and you cannot set REPEAT mode, you may try clamping the texcoords to the [0, 1] range and placing two edge points, one with x=1 and one with x=0, at the same position.
And finally, if your texture doesn't change along the x-axis, you may simply set x = 0.5 for all points.

glTexCoordPointer: meaning of u and v?

I use openctm to save 3D data. Here I can save texture coordinates.
The texture coordinates are later loaded this way:
const CTMfloat * texCoords = ctm.GetFloatArray(CTM_UV_MAP_1);
for(CTMuint i = 0; i < numVertices; ++ i)
{
    aMesh->mTexCoords[i].u = texCoords[i * 2];
    aMesh->mTexCoords[i].v = texCoords[i * 2 + 1];
}
texCoords is a float array (2 floats per point).
Later the texture is used this way:
glTexCoordPointer(2, GL_FLOAT, 0, &aMesh->mTexCoords[0]);
My problem is that I need to generate the texCoords array. What do u and v mean? Is it the pixel position? Do I have to scale them by 1/255?
Though not directly related to programming, this tutorial gives a good introduction to what UV coordinates are: http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/UV_Map_Basics
In general you don't "create" or "generate" the UV map programmatically; you have your 3D modeling artists create it as part of the modeling process.
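To answer the direct question, though: u and v are normalized coordinates in [0, 1] across the texture's width and height, not pixel positions, so any conversion divides by the texture's dimensions rather than by 255. A sketch (pixelX, textureWidth etc. are hypothetical names):
float u = pixelX / float(textureWidth);    // fraction of the width, not a color range
float v = pixelY / float(textureHeight);   // fraction of the height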

Omnidirectional shadow mapping with depth cubemap

I'm working with omnidirectional point lights. I have already implemented shadow mapping using a cubemap texture as the color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded by the light or not.
My problem comes when converting a depth value from the cubemap back into a distance. I use the light-to-fragment vector (in world space) to fetch my depth value from the cubemap. At this point, I don't know which of the six faces is being used, nor which 2D texture coordinates match the depth value I'm reading. How, then, can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffer construction:
for (int i = 0; i < 6; ++i)
{
    glGenFramebuffers(1, &FBO->FrameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
    glDrawBuffer(GL_NONE);
}
The piece of the fragment shader I'm trying to write to achieve this:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    ShadowVec = DepthValueToDistance(ShadowVec);
    if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
        return 1.0;
    return 0.0;
}
The DepthValueToDistance function is my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0; // Far plane of the cubemap frustums
    const float n = 1.0;    // Near plane of the cubemap frustums
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}

float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
        return 1.0;
    return 0.0;
}
Explanation of VectorToDepthValue(vec3 Vec):
LocalZcomp corresponds to what would be the Z component of the given Vec in the matching frustum of the cubemap. It's actually the largest component of Vec (for instance, if Vec.y is the largest component, we will look at either the Y+ or the Y- face of the cubemap).
If you look at this Wikipedia article, you will understand the math just after it (kept in a formal form for understanding), which simply converts LocalZcomp into a normalized Z value in [-1, 1] and then maps it into [0, 1], the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frustums used to generate the cubemap.
ComputeShadowFactor then simply compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), also adding a small depth bias (which was missing in the question), and returns 1 if the fragment is not occluded by the light.
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (= Z), which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation (a sketch follows below).
V needs to be in world space since the shadow cube map is normally axis aligned in world space thus the "max(abs(V.x),abs(V.y),abs(V.z))"-part only works if V is a world space direction vector.
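A minimal sketch of that uniform upload (the uniform names fpn, fmn, fn and the program handle are hypothetical):
float n = 1.0f, f = 2048.0f; // near/far used when rendering the cubemap
glUniform1f(glGetUniformLocation(program, "fpn"), f + n);
glUniform1f(glGetUniformLocation(program, "fmn"), f - n);
glUniform1f(glGetUniformLocation(program, "fn"),  f * n);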

Texture wrong value in fragment shader

I'm loading custom data into a 2D texture with format GL_RGBA16F:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);

float* grammardata = new float[Gx * Gy * 4](); // value-initialized, so all zero
*(grammardata) = 1; // set the red channel of texel [0,0] to 1
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, Gx, Gy, GL_RGBA, GL_FLOAT, grammardata);

int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0, 0)).r == 0.25) {
    FragColor = vec4(0, 1, 0, 1);
} else {
    FragColor = vec4(1, 0, 0, 1);
}
By default texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR,
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_WRAP[R|S|T] = GL_REPEAT
This means that in cases where the mapping between texels of the texture and pixels on the screen does not fit exactly, the hardware will interpolate for you. There are two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, they are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this is the case here): here the hardware does not interpolate between mipmap levels, but between adjacent texels of the texture. The problem then comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower-left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels):
tex-coord:   0         0.25         0.5        0.75          1
texels       |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 is on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means that with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since texel -1 wraps to 127 (due to repeat) and everything except [0,0] is 0, this results in
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 =
(  1   +   0    +    0    +   0   ) / 4 = 0.25
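Two possible fixes follow from this, sketched here for the question's setup: either disable interpolation, or keep GL_LINEAR and sample at the texel center.
// Fix 1: nearest-neighbour filtering returns texel [0,0] exactly.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Fix 2 (in GLSL): sample at the center of texel [0,0], which for the
// 128x128 texture in the question is:
//     texture(grammar, vec2(0.5 / 128.0, 0.5 / 128.0))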