I'm loading a .obj file into my program (without a .mtl file).
In the vertex shader, I have this:
#version 330
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec3 in_color;
and my vertex structure looks like this:
struct VertexFormat {
    glm::vec3 position;
    glm::vec3 color;
    glm::vec3 normal;
    glm::vec2 texcoord;

    VertexFormat()
        : position(0, 0, 0), color(0, 0, 0), normal(0, 0, 0), texcoord(0, 0) {}

    VertexFormat(glm::vec3 _position, glm::vec3 _normal, glm::vec2 _texcoord, glm::vec3 _color) {
        position = _position;
        normal = _normal;
        texcoord = _texcoord;
        // color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));
        color = normal;
    }
};
Because I don't have a .mtl file, the color attribute depends on the other vertex attributes.
If I set color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));, the object loses some of its detail (a human face, for example, looks like just an ellipsoid).
This does not happen when I set color = normal;.
I don't want the color to depend only on the normal attribute, because then every object is colored like a rainbow.
Any idea why and how can I make it work?
EDIT:
This is an object with color = normal:
And this is with color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));:
The only change between the two pictures is that I commented out color = normal; and uncommented the other line.
In your comment you wrote
I would prefer to not use lighting at all. I don't understand why without lighting the first works (shows the details), while the other one doesn't
Perceived detail depends on the color contrast in the final picture. The stronger the contrast, the stronger the detail (there's a strong relation to so-called spatial frequencies as well).
Anyway, creases, edges, bulges, etc. in the mesh create a strong, locally position-dependent variation of the surface normal, which is what you see. In mathematical terms you could write this as
|| ∂/∂r n(r) ||
where n denotes the normal and r denotes the position, which becomes very large for creases and such.
The variation of a position-dependent color c(r), however, would be
|| ∂/∂r c(r) ||
But since c(r) depends only on r and not on any local surface feature, c varies just as smoothly as the position itself, so the local spatial variation in color is smooth as well, i.e. it has no strong features.
Essentially it means that you can make details visible only based on derivatives of surface features such as the normals.
The easiest way to do this is to use illumination. But you can use other methods as well: for example, you can calculate the local variation of the normals (giving you the curvature of the surface) and make more strongly curved areas brighter, or you can do post-processing on the screen-space geometry, applying something like a first- or second-order gradient filter.
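To illustrate just the illumination route, here is a minimal fragment shader sketch with one hard-coded directional light (the variable names, light direction and base color are placeholders, not taken from your code):
#version 330
// Illustration only: a single hard-coded directional light.
// "normal" is assumed to be interpolated from the vertex shader.
in vec3 normal;
out vec4 fragColor;

void main()
{
    vec3 lightDir = normalize(vec3(0.4, 0.6, 0.7));
    vec3 baseColor = vec3(0.8);
    // The diffuse term depends on the surface normal, so creases and
    // bulges produce the strong local contrast described above.
    float diffuse = max(dot(normalize(normal), lightDir), 0.0);
    fragColor = vec4(baseColor * (0.2 + 0.8 * diffuse), 1.0);
}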
But you will not get around applying some math to it. There's no such thing as a free lunch. Also don't expect people to write code for you without being clear about what you actually want.
The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;
out vec3 normal;
out vec3 colour2;
vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}
void main(void)
{
    normal = vec3(0, 0, 1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below):
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (typical shader):
2D case (above shader):
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better: it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!
The gist of my problem is best described by this image:
This is the beginnings of a 2D, top-down game. The track is randomly generated and each segment drawn as a quad, with the green edge color being offset by random noise in the fragment shader. It looks perfect as a static image.
The main character is always centered in the image. When you move it, the noise changes; that's not what I intend to happen. If you move the character, the image you see should just slide across the screen. Instead, the noise calculation changes and the edges shift as the image moves.
I should mention now that the track's vertices are defined by integer points (e.g. 750, -20). The vertex shader passes the original vertices to the fragment shader, unmodified by the projection or camera offset. The fragment shader uses these values to color the segments, producing the image above.
Here's the fragment shader with a portion purposefully commented out:
#version 430 core
layout(location = 7) in vec4 fragPos;
layout(location = 8) in flat vec4 insideColor;
layout(location = 9) in flat vec4 edgeColor;
layout(location = 10) in flat vec4 activeEdges; // 0 = no edges; 1 = north; 2 = south; 4 = east, 8 = west
layout(location = 11) in flat vec4 edges; // [0] = north; [1] = south; [2] = east, [3] = west
layout(location = 12) in flat float aspectRatio;
layout(location = 1) uniform vec4 offset;
out vec4 diffuse;
vec2 hash2( vec2 p )
{
    return fract(sin(vec2(dot(p,vec2(127.1,311.7)),dot(p,vec2(269.5,183.3))))*43758.5453);
}
void main()
{
    vec2 r = hash2(fragPos.xy) * 30.0f;
    /*vec2 offset2 = floor(vec2(offset.x/2.001, -offset.y/1.99999));
    vec2 r = hash2(gl_FragCoord.xy - offset2.xy) * 10;*/
    float border = 10.f;
    float hasNorth = float((int(activeEdges[0]) & 1) == 1);
    float hasSouth = float((int(activeEdges[0]) & 2) == 2);
    float hasEast = float((int(activeEdges[0]) & 4) == 4);
    float hasWest = float((int(activeEdges[0]) & 8) == 8);
    float east = float(fragPos.x >= edges[2] - border - r.x);
    float west = float(fragPos.x <= (edges[3] + border + r.x));
    float north = float(fragPos.y <= edges[0] + border + r.y);
    float south = float(fragPos.y >= (edges[1] - border - r.y));
    vec4 c = (east * edgeColor) + (west * edgeColor) + (north * edgeColor) + (south * edgeColor);
    diffuse = (c.a == 0 ? (vec4(1, 0, 0, 1)) : c);
}
The "offset" uniform is the camera position. It is also used by the vertex shader.
The vertex shader has nothing special going on. It does a simple projection on the original vertex and then passes the original, unmodified vertex through to fragPos.
void main()
{
    fragPos = vPosition;
    insideColor = vInsideColor;
    edgeColor = vEdgeColor;
    activeEdges = vActiveEdges;
    edges = vEdges;
    aspectRatio = vAspectRatio;
    gl_Position = camera.projection * (vPosition + offset);
}
With this setup, the noisy edges move a lot while moving the camera around.
When I switch to the commented out portion of the fragment shader, calculating noise by subtracting the camera position with some math applied to it, the north/south edges are perfectly stable when moving the camera left/right. When moving up/down, the east/west edges move ever so slightly and the north/south edges are rather unstable.
Questions:
I don't understand why the noise is unstable when using the fragment shader-interpolated vertex positions. ([720,0] - [0,0]) * 0.85 should not change depending on the camera. And yet it does.
Using the viewport-based fragment position modified by the camera position stabilizes the image, but not completely, and I'm at a loss to see why.
This is undoubtedly something stupid but I can't see it. (Also, feel free to critique the code.)
You're probably running into floating-point precision issues. The noise function you're using seems rather sensitive to rounding errors, and rounding errors do happen during all those coordinate transformations. What you could do to mitigate that is use integer arithmetic: the built-in variable gl_FragCoord contains the (half-)integer coordinates of the on-screen pixel in its first two components. Cast those to integers and subtract the integer camera offset to get exact integer values you can feed into the noise function.
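A minimal sketch of that idea, placed alongside your hash2() function (camOffsetPixels is an assumed ivec2 uniform holding the camera offset in whole pixels; it is not part of your current code):
// Assumed uniform: the camera offset in whole pixels.
uniform ivec2 camOffsetPixels;

vec2 stableNoise()
{
    // gl_FragCoord.xy is always pixel center + 0.5, so truncating to int is
    // exact; the subtraction below then happens in integer arithmetic and is
    // free of rounding error.
    ivec2 pixel = ivec2(gl_FragCoord.xy) - camOffsetPixels;
    return hash2(vec2(pixel));
}
In main() you would then write vec2 r = stableNoise() * 30.0; instead of hashing fragPos.xy.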
But this particular noise function does not seem like a good idea here at all, it's clunky and hard to use, and most importantly it doesn't look good at different scales. What if you want to zoom in or out? When that happens, all sorts of wacky things will happen to the border of your track.
Instead of trying to fix that noise function with integer arithmetic, you could simply use a fixed noise texture, and sample from that texture appropriately instead of writing a noise function. That way your noise won't be sensitive to rounding errors, and it'll stay identical when zooming in or out.
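For example (a sketch; noiseTex is an assumed pre-made tiling noise texture bound with GL_REPEAT wrapping, and 256.0 is its assumed scale in world units):
// Assumed uniform: a small tiling noise texture (wrap mode GL_REPEAT).
uniform sampler2D noiseTex;

vec2 textureNoise(vec2 worldPos)
{
    // Sampling by world position pins the pattern to the track rather than
    // to the screen, and it scales naturally when you zoom in or out.
    return texture(noiseTex, worldPos / 256.0).rg;
}
In main() you could then use something like vec2 r = textureNoise(fragPos.xy) * 30.0; in place of the hash.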
But I should note that I don't think what you're doing is an appropriate use of procedural shaders at all. Why not just construct a tileset of track piece tiles, and use that tileset to lay down some track using a vertex buffer filled with simple textured rectangles? It has several advantages over procedural fragment shaders:
You or your artist(s) get to decide exactly how the track and its border look: proper blades of grass or whatever the border is supposed to be, maybe some pebbles or cobblestones, the sort of detail you could see on such a track, with a freely chosen color scheme, and all without any advanced shader wizardry.
Your track is not limited to rectangular shapes, and can connect easily.
Your track does not require a special-purpose shader to be enabled. Seriously, don't underestimate the value of keeping your shaders nonspecific.
The tileset can trivially be extended to other pieces of scenery, while the custom shader cannot.
I'm having a rendering problem. The object in question is a large plane consisting of two triangles. It should cover most of the area of the window, but parts of it disappear and reappear as the camera moves and turns (I never see the whole plane, though).
Note that the missing parts are NOT whole triangles.
I have messed around with the camera to find out where this is coming from, but I haven't found anything.
I haven't added view frustum culling yet.
I'm really stuck, as I have no idea which part of my code I even have to look at to solve this. Searches mainly turn up questions about whole triangles missing, which is not what's happening here.
Any pointers to what the cause of the problem may be?
Edit:
I downscaled the plane and added another texture that's better suited for testing.
Now I have found this behaviour:
This looks like I expect it to
If I move forward a bit more, this happens
It looks like the geometry behind the camera is flipped and rendered even though it should be invisible?
Edit 2:
my vertex and fragment shaders:
#version 330
in vec3 position;
in vec2 textureCoords;
out vec4 pass_textureCoords;
uniform mat4 MVP;
void main() {
    gl_Position = MVP * vec4(position, 1);
    pass_textureCoords = vec4(textureCoords/gl_Position.w, 0, 1/gl_Position.w);
    gl_Position = gl_Position/gl_Position.w;
}
#version 330
in vec4 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;
void main()
{
    fragColor = texture(textureSampler, pass_textureCoords.xy/pass_textureCoords.w);
}
Many drivers do not handle big triangles that cross the z-plane very well, as depending on your precision settings and the drivers' internals these triangles may very well generate invalid coordinates outside of the supported numerical range.
To make sure this is not the issue, try to manually tessellate the floor in a few more divisions, instead of only having two triangles for the whole floor.
Doing so is quite straightforward. You'd have something like this pseudocode:
division_size_x = width / max_x_divisions
division_size_y = height / max_y_divisions
for i from 0 to max_x_divisions:
    for j from 0 to max_y_divisions:
        vertex0 = { i * division_size_x,     j * division_size_y }
        vertex1 = { (i+1) * division_size_x, j * division_size_y }
        vertex2 = { (i+1) * division_size_x, (j+1) * division_size_y }
        vertex3 = { i * division_size_x,     (j+1) * division_size_y }
        OutputTriangle(vertex0, vertex1, vertex2)
        OutputTriangle(vertex2, vertex3, vertex0)
Apparently there is an error in my matrices that caused problems with vertices that are behind the camera. I deleted all the divisions by w in my shaders and did gl_Position = -gl_Position (initially just to test something); it works now.
I still need to figure out the exact problem but it is working for now.
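For reference, the standard form of these shaders, without the manual divisions by w, looks something like this (a sketch using the same variable names; the GPU clips triangles against w and performs the perspective division itself, and varyings are interpolated perspective-correctly):
// vertex shader
#version 330
in vec3 position;
in vec2 textureCoords;
out vec2 pass_textureCoords;
uniform mat4 MVP;

void main() {
    // Leave gl_Position in clip space; clipping and the perspective divide
    // happen in fixed function after the vertex shader runs.
    gl_Position = MVP * vec4(position, 1.0);
    pass_textureCoords = textureCoords;
}

// fragment shader
#version 330
in vec2 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords);
}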
I am currently coding a simple vertex shader for a model. What I want to achieve is something like this :
I have a model of a dragon, nothing too fancy, and I want the vertex shader to move the wing vertices around a bit, to simulate flying. Now, this is for academic purposes, so it doesn't have to be perfect in any way.
What I'm looking for, precisely, is: how do I make, for example, only the vertices further from the center of the model move? Is there any way I can compare the position of the vertex to the center of the model and make it move more or less (using a time variable sent from the OpenGL app) depending on the distance to the center?
If not, are there any other ways that would be appropriate and relatively simple to do?
You could try this:
#version 330 core
in vec3 vertex;

void main() {
    // get the distance from the model origin (0, 0, 0)
    float distanceFromCenter = length(vertex);
    // create a simple squiggly wave function
    // you have to change the constant at the end depending
    // on the size of your model
    float distortionAmount = sin(distanceFromCenter / 10.0);
    // the last vector says on which axes to distort, and how much;
    // this example would wiggle on the z-axis
    vec3 distortedPosition = vertex + distortionAmount * vec3(0, 0, 1);
    // gl_Position is a vec4, so the distorted position has to be widened
    gl_Position = vec4(distortedPosition, 1.0);
}
It might not be perfect, but it should get you started.
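If you want to tie the motion to the time variable sent from your application, a possible variation (the time and MVP uniforms are assumptions about your setup, and the constants need tuning to your model's size):
#version 330 core
in vec3 vertex;
uniform float time; // assumed: elapsed seconds, updated by the application
uniform mat4 MVP;   // assumed: the usual model-view-projection matrix

void main() {
    float distanceFromCenter = length(vertex);
    // Vertices further from the center swing more, and sin(time) makes the
    // offset oscillate instead of staying frozen.
    float distortionAmount = 0.1 * distanceFromCenter * sin(2.0 * time);
    vec3 distortedPosition = vertex + distortionAmount * vec3(0.0, 0.0, 1.0);
    gl_Position = MVP * vec4(distortedPosition, 1.0);
}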
I'm trying to implement a ground fog shader for my terrain rendering engine.
The technique is described in this article: http://www.iquilezles.org/www/articles/fog/fog.htm
The idea is to consider the ray going from the camera to the fragment and integrate the fog density function along this ray.
Here's my shader code:
#version 330 core
in vec2 UV;
in vec3 posw;
out vec3 color;
uniform sampler2D tex;
uniform vec3 ambientLightColor;
uniform vec3 camPos;
const vec3 FogBaseColor = vec3(1., 1., 1.);
void main()
{
    vec3 light = ambientLightColor;
    vec3 TexBaseColor = texture(tex, UV).rgb;

    //***************************FOG********************************************
    vec3 camFrag = posw - camPos;
    float distance = length(camFrag);
    float a = 0.02;
    float b = 0.01;
    float fogAmount = a * exp(-camPos.z*b) * ( 1.0-exp( -distance*camFrag.z*b ) ) / (b*camFrag.z);
    color = mix( light*TexBaseColor, light*FogBaseColor, fogAmount );
}
The first thing is that I don't understand how to choose a and b, and what their physical role is in the fog density function.
Then, the result is not what I expect…
I have a ground fog, but the transition of fogAmount from 0 to 1 is always centered at the camera altitude. I've tried a lot of different a and b values, but when I don't have a transition at camera altitude, I either get terrain that is entirely fogged or not fogged at all.
I checked the data I use and everything's correct:
camPos.z is the altitude of my camera
camFrag.z is the vertical component of the vector going from the camera to the fragment
I can't get to understand what part of the equation cause this.
Any idea about this?
EDIT: Here's the effect I'm looking for:
image1
image2
This is a pretty standard application of atmospheric scattering.
It is usually discussed under the umbrella of volumetric lighting, which involves the transmittance of light through different media (e.g. smoke, air, water). In cutting-edge shader-based graphics this can be achieved in real time using ray marching, or, if there is only one uniform participating medium (as in this case, where the fog only applies to the air), simplified to an integration over some distance.
Ordinarily you would ray-march through the participating medium in order to determine the properties of light transfer, but this application is simplified to assume a medium that has well-defined distribution characteristics, and that is where the coefficients you are confused about come from. The density of the fog varies exponentially with distance, and this is what b is controlling; likewise, it also varies with altitude (not shown in the equation directly below).
[fog equation from the article] (source: iquilezles.org)
What this article introduces to the discussion, however, are poorly named coefficients a and b. These control in-scattering and extinction. The author repeatedly refers to the extinction coefficient as extintion, which really makes no sense to me - hopefully this is just because English was not the author's native language. Extinction can be thought of as how quickly light is absorbed, and it describes the opacity of a medium. If you want a more theoretical basis for all of this, have a look at the following paper.
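To sketch where that formula comes from (under the usual assumption that the fog density decays exponentially with altitude h, i.e. density(h) = a·e^(−b·h)): integrating this density along a ray that starts at the camera altitude rayOri.y and goes in the normalized direction rayDir for a distance d gives

∫₀^d a·e^(−b·(rayOri.y + t·rayDir.y)) dt = (a/b)·e^(−b·rayOri.y)·(1 − e^(−b·d·rayDir.y)) / rayDir.y

which is exactly the expression in the code below, with the constant a/b folded into c. So b sets how quickly the fog thins out with altitude, and a (together with b) scales the overall amount of fog.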
With this in mind, take another look at the code from your article:
vec3 applyFog( in vec3 rgb,       // original color of the pixel
               in float distance, // camera to point distance
               in vec3 rayOri,    // camera position
               in vec3 rayDir )   // camera to point vector
{
    float fogAmount = c*exp(-rayOri.y*b)*(1.0-exp(-distance*rayDir.y*b))/rayDir.y;
    vec3 fogColor = vec3(0.5,0.6,0.7);
    return mix( rgb, fogColor, fogAmount );
}
You can see that c in this code is actually a from the original equation.
More importantly, there is an additional expression here: the altitude-dependent factor, exp(-rayOri.y*b) in the code above.
This additional expression controls the density with respect to altitude. Judging by your implementation of the shader, you have not correctly implemented the second expression. camFrag.z is very likely not altitude, but rather depth. Furthermore, I do not understand why you are multiplying it by the b coefficient.
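For comparison, here is a sketch (untested) of the fog part of your shader rewritten to follow the article's formula directly, treating z as the vertical axis as in your data:
// Sketch only: the article's fog formula adapted to the variable names above.
vec3 camFrag = posw - camPos;
float dist = length(camFrag);
vec3 rayDir = camFrag / dist;   // normalized direction from camera to fragment
// Note that dist * rayDir.z equals camFrag.z, so the original exponent
// -distance*camFrag.z*b effectively contained the distance twice.
float fogAmount = (a / b) * exp(-camPos.z * b)
                * (1.0 - exp(-dist * rayDir.z * b)) / rayDir.z;
fogAmount = clamp(fogAmount, 0.0, 1.0);
color = mix(light * TexBaseColor, light * FogBaseColor, fogAmount);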
I found a method that gives the result I was looking for.
The method is described in this article by Eric Lengyel: http://www.terathon.com/lengyel/Lengyel-UnifiedFog.pdf
It explains how to create a fog layer with density and altitude parameters. You can fly through it, and it progressively blends all the geometry above the fog.