I want to texture a sphere with a cube map. So far my research has turned up many results on Google about making OpenGL auto-generate texture coordinates, but I want to generate my own coordinates.
Given an array of coordinates comprising the vertices of an imperfect sphere (height-mapped, but essentially a sphere) centered on 0,0,0, how would one generate texture coordinates for a cube map?
Are you doing this via GLSL? In that case textureCube accepts a vec3 as texture coordinate, which is a unit vector on a sphere. In your case you would take the coordinate of your fragment with respect to the center of the sphere, normalize it and pass it as a coordinate. No need to worry about the internal representation as six two-dimensional textures.
uniform samplerCube cubemap;
varying vec3 pos; // position of the fragment w.r.t. the center of the sphere
/* ... */
vec4 color = textureCube(cubemap, normalize(pos).stp);
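For completeness, a matching vertex shader could look like the following sketch (it assumes the sphere mesh is modeled around the origin, so the object-space vertex position already is the offset from the center):
varying vec3 pos;   // consumed by the fragment shader above

void main() {
    pos = gl_Vertex.xyz;   // vertex position relative to the sphere's center
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}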
It works the same way in fixed-pipeline OpenGL.
By the way, here is how the coordinates are used internally: the coordinate with the largest absolute value selects which of the six textures is read from (its sign selects positive or negative). The other two coordinates are used to look up the texel in the selected map, after being divided by the absolute value of the largest coordinate.
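For illustration only, that selection logic corresponds roughly to the following GLSL sketch (the exact per-face sign conventions of the specification are omitted; the hardware applies them for you):
vec3 r = normalize(pos);
vec3 a = abs(r);
if (a.x >= a.y && a.x >= a.z) {
    // +X or -X face, chosen by sign(r.x); the texel is looked up with r.yz / a.x
} else if (a.y >= a.z) {
    // +Y or -Y face, chosen by sign(r.y); the texel is looked up with r.xz / a.y
} else {
    // +Z or -Z face, chosen by sign(r.z); the texel is looked up with r.xy / a.z
}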
I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact, no 2D projection of any part of a sphere can be truly equiangular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
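To illustrate that last suggestion, here is a rough fragment-shader sketch that samples an equidistant cylindrical horizon texture from a direction d, with z up as in the code above (horizon_sampler is an assumed name):
uniform sampler2D horizon_sampler;   // equidistant cylindrical strip around the horizon

vec3 sample_horizon(vec3 d)
{
    d = normalize(d);
    float lon = atan(d.y, d.x);               // longitude in [-pi, pi]
    float lat = asin(clamp(d.z, -1.0, 1.0));  // latitude in [-pi/2, pi/2]
    vec2 uv = vec2(lon / (2.0 * 3.14159265) + 0.5,
                   lat / 3.14159265 + 0.5);
    return texture(horizon_sampler, uv).rgb;
}
Above/below a fixed latitude you would return the solid cap colors instead of sampling the texture.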
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. An environment map should behave as if you are always at the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated cube vertices. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
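Putting the pieces together, a complete pair of shaders might look like the following sketch (inVertex, projection and view are assumed names; view is assumed to contain only the camera rotation, e.g. mat4(mat3(view)), so the cube stays centered on the viewer):
Vertex shader:
#version 400 core
in vec3 inVertex;                  // unit-cube vertex
out vec3 cube_edge;
uniform mat4 projection;
uniform mat4 view;                 // rotation only

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;    // depth becomes 1.0 after the perspective divide
}
Fragment shader:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;

void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}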
I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/length(modelPositionVarying.xyz))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinate (u) of two adjacent red/blue vertices has values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0 and nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate topologies, where you want the texture coordinate to wrap from 1.0 back to 0.0, requires the mesh to handle it explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates gets a 0.0 texture coordinate and is connected to the vertices coming from the right (in your example). The other gets a 1.0 texture coordinate and is connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
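As a rough sketch of that alternative, reusing the formula from the question above (earth_sampler is an assumed name; the interpolated object-space position stands in for the normal, which points in the same direction for a sphere centered at the origin):
uniform sampler2D earth_sampler;
varying vec3 modelPositionVarying;   // interpolated object-space position
#define M_PI 3.14159265358979

void main() {
    vec3 p = normalize(modelPositionVarying);   // direction == normal for an origin-centered sphere
    vec2 uv = vec2(atan(p.y, p.x) / (2.0 * M_PI) + 0.5,
                   acos(p.z) / M_PI);
    gl_FragColor = texture2D(earth_sampler, uv);
}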
I'm having a bit of trouble with getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this.
Define the clip position as a varying:
varying vec4 clipPos;
...
In the vertex shader, assign the position:
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? Because I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have thought I would get a smooth transition of fog from right in front of the camera out to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not unusually small or large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case: the NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate instead - for a typical projection matrix, that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range that you use in your projection matrix - but specifying fog distances in eye-space units (which normally are identical to world-space units) is more intuitive anyway.
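A minimal sketch of that change, keeping the depthMap, fogNear and fogFar names from the question (texCoord is an assumed name for the screen-space lookup coordinate, and fogNear/fogFar are now given in eye-space units):
// First pass: store the eye-space distance instead of the NDC z value
gl_FragColor.w = clipPos.w;   // for a standard perspective projection this equals -z_eye

// Fog pass: read it back and feed it to smoothstep directly
float depth = texture2D(depthMap, texCoord).w;
float fogFactor = smoothstep(fogNear, fogFar, depth);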
If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. Here is a detail of what choosing the average normal for each tip vertex looks like.
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling with cones in modern OpenGL (i.e. shaders) made up from triangles a bit but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders as it was used for calculation dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I was only transforming the vertex positions (multiplying by the projection and model-view matrices) and also transforming the normal vectors (multiplying by the transposed inverse of the model-view matrix). Then the transformed positions, the transformed normal vectors and the untransformed colors were passed to the fragment shader, where I calculated the dot product of the light direction and the normal vector and multiplied this number with the color.
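For reference, the vertex shader described above might look roughly like this (attribute, uniform and output names are assumptions):
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal_vector;
layout(location = 2) in vec3 color;

uniform mat4 projection;
uniform mat4 model_view;
uniform mat3 normal_matrix;    // transpose(inverse(mat3(model_view)))

out vec3 frag_normal;          // intentionally not normalized here - see the trick below
out vec3 frag_color;

void main() {
    gl_Position = projection * model_view * vec4(position, 1.0);
    frag_normal = normal_matrix * normal_vector;
    frag_color  = color;
}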
Let me start with what I did and found unsatisfactory:
Attempt#1: Each cone face (triangle) was using a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting; each face had a constant color because all fragments of the triangle had the same normal vector. Wrong.
Attempt#2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt#1). Well, this was better because I had smooth lighting in the part closer to the base of the cone, but not smooth near the tip. Wrong.
But then I found the solution:
Attempt#3: I did everything as in attempt#2, except I assigned the normal vector at the cone-tip vertices equal to the zero vector vec3(0.0f, 0.0f, 0.0f). This is the key to the trick! This zero normal vector is then passed to the fragment shader (i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices). Of course you then need to normalize the vector in the fragment (!) shader, because it no longer has a constant length of 1 (which I need for the dot product). So I normalize it - of course this is not possible for the very tip of the cone, where the interpolated normal vector has a length of zero, but it works for all other points. And that's it.
There is one important thing to remember: you can only do this normalization in the fragment shader. You will get an error if you try to normalize a zero-length vector in C++. So if you need normalization before the fragment shader for some reason, make sure you exclude the zero-length normal vectors (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at all points except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader; the only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader or earlier.
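A minimal fragment shader using this trick might look like the following sketch (light_direction is an assumed uniform, normalized and pointing from the surface towards the light; frag_normal and frag_color are the interpolated normal and color from the vertex shader):
#version 330 core
in vec3 frag_normal;     // interpolated; zero only at the exact cone-tip fragment
in vec3 frag_color;
uniform vec3 light_direction;
out vec4 out_color;

void main() {
    // Normalize here, in the fragment shader - the interpolated vector is
    // non-zero everywhere except the single tip fragment.
    vec3 n = normalize(frag_normal);
    float diffuse = max(dot(n, light_direction), 0.0);
    out_color = vec4(frag_color * diffuse, 1.0);
}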
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear:
Here are some things you could try...
Use quads (as @WhitAngl says). To hell with new OpenGL, there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though looks a bit strange against the unlit side. Unfortunately this goes against your question title, low polygon cone.
Making sure your cone is centred around the object space origin (or procedurally generating it in the vertex shader), use the fragment position to generate the normal...
in vec2 coneSlope; //normal x/z magnitude and y
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;
void main()
{
    vec3 osNormal = vec3(normalize(objectSpaceFragPos.xz) * coneSlope.x, coneSlope.y);
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there are some fancy tricks you can do to reduce fragment shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones. In which case you might get into geometry shaders or instancing. Better yet you could draw the cones using quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.
I am trying to put a texture in only a part of a sphere.
I have a sphere representing the earth with its topography and a terrain texture for a part of the globe, say satellite map for Italy.
I want to show that terrain over the part of the sphere where Italy is.
I'm creating my sphere drawing a set of triangle strips.
As far as I understand, if I want to use a texture I need to specify a texture coord for each vertex (glTexCoord2*). But I do not have a valid texture for all of them.
So how do I tell OpenGL to skip the texture for those vertices?
I'll assume you have two textures or a color attribute for the remainder of the sphere ("not Italy").
The easiest way to do this would be to create a texture that covers the whole sphere, but use the alpha channel: for example, alpha=1 for "not Italy" and alpha=0 for "Italy". Then you could do something like this in your fragment shader (pseudo-code, I did not test anything):
...
uniform sampler2D extra_texture;
in vec2 texture_coords;
out vec3 final_color;
...
void main() {
    ...
    // Assume color1 to be the base color for the sphere, no matter how you get it (attribute/texture); it has at least 3 components.
    vec4 color2 = texture(extra_texture, texture_coords);
    final_color = mix(vec3(color2), vec3(color1), color2.a);
}
The colors in mix are combined as mix(x, y, a) = x*(1-a) + y*a, done component-wise for vectors. So you can see that if alpha=1 ("not Italy"), color1 will be picked, and vice versa for alpha=0.
You could extend this to multiple layers using texture arrays or something similar, but I'd keep it a simple two-layer setup to begin with.