I've adapted the Nvidia CSM example to work with the OpenGL core profile, using GLM for the math. However, I have noticed a problem that I think is related to scaled models: the shadows seem to shrink the closer I am to them.
In the fragment shader I get the shadow coordinate like so:
vec4 shadow_coord = textureMatrix[index] * vec4(positionCS, 1.0);
In the vertex shader I make positionCS like so (modelView is V * M):
positionCS = (modelView * vertPosition).xyz;
I read somewhere that I should be using the inverse transpose of the model matrix if the model matrix scales the mesh, but that particular example was multiplying the normal vector, not the position vector. I tried it anyway, but it simply breaks the shadow mapping completely.
Any ideas?
It only seems to happen on the closer splits of the CSM.
OK, I fixed it. The Nvidia example was missing the Z scale/offsets for the crop matrix (poor demo?).
I added them in where I create the crop matrix and the problem went away:
float scaleZ = 1.0f / (maxZ - minZ);
float offsetZ = -minZ * scaleZ;
shadCrop[2][2] = scaleZ;
shadCrop[2][3] = offsetZ;
I've now got some slight artifacts on the furthest split, but I haven't set any bias just yet.
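For anyone adapting the same demo, here is a minimal sketch of the whole crop matrix with the Z terms included, written against GLM (the function and variable names are just illustrative, and whether the offsets belong in [3][x] or [x][3] depends on your matrix convention; glm::mat4 is column-major and indexed [column][row]):
#include <glm/glm.hpp>
// Sketch: build a crop matrix for one cascade from the split's bounds in light
// clip space (minX..maxZ are assumed to come from projecting the split's
// frustum corners by the light's view-projection matrix).
glm::mat4 makeCropMatrix(float minX, float maxX,
                         float minY, float maxY,
                         float minZ, float maxZ)
{
    float scaleX  = 2.0f / (maxX - minX);
    float scaleY  = 2.0f / (maxY - minY);
    float scaleZ  = 1.0f / (maxZ - minZ);
    float offsetX = -0.5f * (maxX + minX) * scaleX;
    float offsetY = -0.5f * (maxY + minY) * scaleY;
    float offsetZ = -minZ * scaleZ;

    glm::mat4 crop(1.0f);   // start from identity
    crop[0][0] = scaleX;
    crop[1][1] = scaleY;
    crop[2][2] = scaleZ;
    crop[3][0] = offsetX;   // column-major: the offsets live in the last column
    crop[3][1] = offsetY;
    crop[3][2] = offsetZ;
    return crop;
}
The crop matrix is then applied on top of the light's matrices, e.g. something like crop * lightProj * lightView per split, plus whatever bias/remap your textureMatrix already includes.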
I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle a little and expands them near the corners, so that they cover more nearly equal areas.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
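As a sketch of what that direction-to-texture-coordinate mapping could look like, here it is written CPU-side with GLM (the same math ports directly to a fragment shader); lowerLat, upperLat and the strip texture itself are assumptions of this example, not something from the question:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>
// Sketch: map a view direction d to (u, v) in an equidistant cylindrical strip
// that covers latitudes [lowerLat, upperLat]; outside that band you would fall
// back to the solid cap colors suggested above.
glm::vec2 horizonUV(glm::vec3 d, float lowerLat, float upperLat)
{
    d = glm::normalize(d);
    float lon = std::atan2(d.x, -d.z);                    // longitude in [-pi, pi]
    float lat = std::asin(glm::clamp(d.y, -1.0f, 1.0f));  // latitude in [-pi/2, pi/2]
    float u = lon / (2.0f * glm::pi<float>()) + 0.5f;     // linear in longitude
    float v = (lat - lowerLat) / (upperLat - lowerLat);   // linear in latitude
    return glm::vec2(u, v);
}
Both coordinates are linear in angle, which is what makes the projection equidistant and keeps hand-drawn horizontal features straight around the horizon.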
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube you are sitting in. The environment map should behave as if you are always at its center and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment onto it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
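On the CPU side, a minimal sketch of the draw setup for this trick (assuming the depth buffer is cleared to 1.0; drawSkyboxCube() stands in for binding the cube's VAO and shader and issuing the draw call):
// Draw the skybox last, after the rest of the scene.
glDepthFunc(GL_LEQUAL);   // let fragments at exactly the far plane (z == 1.0) pass
drawSkyboxCube();         // illustrative helper, not a GL call
glDepthFunc(GL_LESS);     // restore the default depth function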
I am developing a robot hand simulation program in which I can load and draw a robot hand model (built in 3ds Max) and perform actions according to inputs, for example joint angles.
The hand model contains many different parts; each part has its own model matrix and a VBO storing vertex coordinates, normals, and UVs.
I want to rotate a joint around a certain axis (not a coordinate axis) using glm::rotate(), so my plan was to translate it to the world origin, perform the rotation, and finally move it back.
The problem is that when I perform the rotation after the translation, the origin it rotates around seems to have changed as well; it looks like the world origin has been translated in the same way.
Also, I don't really know whether glm::rotate rotates around the world origin. It's confusing why the behaviour changes: if I rotate directly, without translating first, it does rotate around the world origin, but when I translate before rotating, it changes.
More details:
I also think the problem most likely lies in the coordinate system. I use a Robothand class to represent the robot hand object, and the "parts" are objects of a Model class. There are parent-child relationships among the models, so every transformation applied to a specific model also affects its child models.
Vertex shader
void main(){
mat4 MVP = P * V * M;
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
vec3 vertexPosition_cameraspace = (V * M * vec4(vertexPosition_modelspace,1)).xyz;
EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
vec3 LightPosition_cameraspace = (V * vec4(LightPosition_worldspace,1)).xyz;
LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;
Normal_cameraspace = ( V * M * vec4(vertexNormal_modelspace,0)).xyz;
UV = vertexUV;
Color = vertexColor;
}
I should explain that the robot hand model was built in 3ds Max and exported as an .obj file. In 3ds Max, all the models share the same world origin, which means the origins of their model coordinate systems are in the same position. There is no problem if I rotate any model without translating it first; it rotates around the world coordinate axes. The real problem is that if I translate first, the world origin appears to be translated in the same way, as if the rotation happens in a new coordinate system.
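For reference, the translate, rotate, and move-back idea described above is usually composed as a single pivot rotation with GLM, written so that the translation to the origin is the transform applied first (the function and variable names here are only illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Sketch: rotate a model around an arbitrary axis passing through pivotPos.
glm::mat4 rotateAroundPivot(const glm::mat4 &model,
                            const glm::vec3 &pivotPos,
                            const glm::vec3 &jointAxis,
                            float angleRadians)
{
    glm::mat4 pivotRotation =
        glm::translate(glm::mat4(1.0f),  pivotPos) *             // 3. move back
        glm::rotate(glm::mat4(1.0f), angleRadians, jointAxis) *  // 2. rotate about the axis
        glm::translate(glm::mat4(1.0f), -pivotPos);              // 1. move the pivot to the origin
    return pivotRotation * model;   // pivot given in world space; for a model-space pivot use model * pivotRotation
}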
I'm trying to implement vertex shader code to achieve "billboard" behaviour on a given vertex mesh. What I want is to define the mesh normally (like a 3D object) and then have it always facing the camera. I also need it to always have the same size (screen-wise). These two "effects" should happen:
The only difference in my case is that instead of a 2-D bar, I want to have a 3D-object.
To do so, I'm trying to follow the alternative 3 in this tutorial (the same where the images are taken from), but I can't figure out many of the assumptions they made (probably due to my lack of experience in graphics and OpenGL).
My shader applies the common transformation stack to vertices, i.e.:
gl_Position = project * view * model * position;
Where position is the input attribute with the vertex location in world-space. I want to be able to apply model transformations (such as translation, scale, and rotation) to modify the orientation of the object with respect to the camera. I understand the concepts explained in the tutorial, but I can't seem to understand how to apply them in my case.
What I've tried is the following (extracted from this answer, and similar to the tutorial):
uniform vec4 billbrd_pos;
...
gl_Position = project * (view * model * billbrd_pos + vec4(position.xy, 0, 0));
But what I get is a shape whose size is bigger when it is closer to the camera, and smaller otherwise. Did I forget something?
Is it possible to do this in the vertex shader?
uniform vec4 billbrd_pos;
...
vec4 view_pos = view * model * billbrd_pos;
float dist = -view_pos.z;
gl_Position = project * (view_pos + vec4(position.xy*dist,0,0));
That way the fragment depths are still correct (at billbrd_pos depth) and you don't have to keep track of the screen's aspect ratio (as the linked tutorial does). It's dependent on the projection matrix though.
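A quick sketch of why the * dist factor keeps the on-screen size constant: with a standard symmetric perspective matrix, clip.w comes out equal to -view_pos.z, i.e. dist, and with f denoting the projection's x scale:
clip.x = f * (view_pos.x + position.x * dist)
clip.w = dist
ndc.x  = clip.x / clip.w = f * view_pos.x / dist + f * position.x
The second term, the billboard's own extent, no longer depends on dist, so the shape covers the same fraction of the screen at any distance (the same argument holds for y).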
I am attempting to load models exported from Blender into OpenGL. In particular, I followed the source code from this tutorial to help me get started. Because the loader is fairly simple, it only reads in the vertex coordinates and face indices, ignoring the normals and texture coordinates.
It then calculates the normal for each face:
float coord1[3] = { Faces_Triangles[triangle_index], Faces_Triangles[triangle_index+1],Faces_Triangles[triangle_index+2]};
float coord2[3] = {Faces_Triangles[triangle_index+3],Faces_Triangles[triangle_index+4],Faces_Triangles[triangle_index+5]};
float coord3[3] = {Faces_Triangles[triangle_index+6],Faces_Triangles[triangle_index+7],Faces_Triangles[triangle_index+8]};
float *norm = this->calculateNormal( coord1, coord2, coord3 );
float* Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3 )
{
/* calculate Vector1 and Vector2 */
float va[3], vb[3], vr[3], val;
va[0] = coord1[0] - coord2[0];
va[1] = coord1[1] - coord2[1];
va[2] = coord1[2] - coord2[2];
vb[0] = coord1[0] - coord3[0];
vb[1] = coord1[1] - coord3[1];
vb[2] = coord1[2] - coord3[2];
/* cross product */
vr[0] = va[1] * vb[2] - vb[1] * va[2];
vr[1] = vb[0] * va[2] - va[0] * vb[2];
vr[2] = va[0] * vb[1] - vb[0] * va[1];
/* normalization factor */
val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );
float *norm = new float[3]; /* heap-allocate so the pointer stays valid after return (the caller must delete[] it) */
norm[0] = vr[0]/val;
norm[1] = vr[1]/val;
norm[2] = vr[2]/val;
return norm;
}
I have 2 questions.
How do I know if the normal is facing inwards or outwards? Is there some ordering of the vertices in each row of the .obj file that gives an indication of how to calculate the normal?
In the initialization function, he uses GL_SMOOTH. Is this incorrect, since I need to provide normals for each vertex if using GL_SMOOTH instead of GL_FLAT?
Question 1
glFrontFace determines winding order
Winding order means the order in which a set of vertices must appear for a face to be considered front-facing. Consider the triangle below. Its vertices are defined clockwise. If we tell OpenGL glFrontFace(GL_CW) (that clockwise means front face), then the normal would essentially be sticking right out of the screen towards you in order to be considered "outward".
On a side note, counter-clockwise is the default and what you should stick with.
No matter what, you really should define normals, especially if you want to do any lighting in your scene, as they are used in the lighting calculations. glFrontFace just lets you tell OpenGL which way you want to interpret what the front of a polygon is.
In the above example, and the diagram below, if we told OpenGL that we define faces counter-clockwise, and also enabled culling with glEnable(GL_CULL_FACE) and glCullFace(GL_BACK), then our triangle wouldn't show up, because we would be looking at the back of it and we told OpenGL not to show the backs of polygons.
You can read more about face culling here: Face Culling.
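For completeness, the relevant state setup is just a few calls (these are the standard defaults being set explicitly: front faces are counter-clockwise, and culling is disabled until you enable it):
glFrontFace(GL_CCW);      // counter-clockwise vertex order marks the front face (the default)
glEnable(GL_CULL_FACE);   // turn face culling on
glCullFace(GL_BACK);      // discard back-facing polygons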
Wavefront .obj has support for declaring normals in a file if you don't want to create them yourself. Just make sure your exporter adds them.
Additionally, Wavefront wants each vertex to have a normal defined:
f v1//vn1 v2//vn2 v3//vn3 ...
Where vN is a vertex of face f and vnN is that vertex's normal. By providing a normal for each vertex, you achieve a smoother-looking surface than you would by defining one normal per face or by setting all of the vertex normals of the same face to the same value. Take a look at this question to see the difference it can make on a sphere: OpenGL: why do I have to set a normal with glNormal?
If your .obj file doesn't have normals defined, I would use the face definition order and cross two edges of the defined face. Consider the method used here: Calculating a Surface Normal
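As a sketch, assuming the usual counter-clockwise winding, the outward face normal is the normalized cross product of two edges taken in face order (shown here with GLM; swap the edge order, or the winding, and the normal flips):
#include <glm/glm.hpp>
// Sketch: outward normal of a triangle (v1, v2, v3) listed counter-clockwise
// when viewed from the front.
glm::vec3 faceNormal(const glm::vec3 &v1, const glm::vec3 &v2, const glm::vec3 &v3)
{
    return glm::normalize(glm::cross(v2 - v1, v3 - v1));
}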
Edit
I think I may have been a little confusing. The front face of a polygon is only slightly related to its normals. Normals are only really used for lighting calculations. You don't have to have them, but they are one of the big variables used in calculating how lit your object is.
I am explaining the "front-faced-ness" of a polygon at the same time because it sort of makes sense, when talking about convex polygons, that your normal would stick out of the "front" of your triangle with respect to the shape you are making.
If you created a huge cave, or if your camera were to mostly reside inside of some concave shape, then it would make sense to have your normals point inwards since your light sources are probably going to want to bounce off of the inside of your shape.
Question 2
GL_SMOOTH determines which of the shading models you want to use with glShadeModel
GL_SMOOTH means smooth shading, where color is interpolated across each primitive between its vertices, whereas GL_FLAT means flat shading, where only one color is used per primitive. Typically, you'll use the default value, GL_SMOOTH.
You don't have to define normals for each vertex in either case. However, if you want GL_SMOOTH to look "good", you'll probably want to, as it will interpolate between the vertices as it renders rather than just picking one vertex's properties.
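For reference, in the fixed-function pipeline the shading model is a single state call:
glShadeModel(GL_SMOOTH);     // interpolate the lit color across each primitive (the default)
// glShadeModel(GL_FLAT);    // or: one color per primitive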
Also, bear in mind that all of this goes out the window whenever you leave the fixed-function pipeline and start using shaders.
I'm writing a game using OpenGL and I am trying to implement shadow volumes.
I want to construct the shadow volume of a model on the GPU via a vertex shader. To that end, I represent the model with a VBO where:
Vertices are duplicated such that each triangle has its own unique three vertices
Each vertex has the normal of its triangle
For reasons I'm not going to get into, I was actually doing the above two points anyway, so I'm not too worried about the vertex duplication
Degenerate triangles are added to form quads inside the edges between each pair of "regular" triangles
Using this model format, inside the vertex shader I am able to find vertices that are part of triangles that face away from the light and move them back to form the shadow volume.
What I have left to figure out is what transformation exactly I should apply to the back-facing vertices.
I am able to detect when a vertex is facing away from the light, but I am unsure what transformation I should apply to it. This is what my vertex shader looks like so far:
uniform vec3 lightDir; // Parallel light.
// On the CPU this is represented in world
// space. After setting the camera with
// gluLookAt, the light vector is multiplied by
// the inverse of the modelview matrix to get
// it into eye space (I think that's what I'm
// working in :P ) before getting passed to
// this shader.
void main()
{
vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
vec3 realLightDir = normalize(lightDir);
float dotprod = dot(eyeNormal, realLightDir);
if (dotprod <= 0.0)
{
// Facing away from the light
// Need to translate the vertex along the light vector to
// stretch the model into a shadow volume
//---------------------------------//
// This is where I'm getting stuck //
//---------------------------------//
// All I know is that I'll probably turn realLightDir into a
// vec4
gl_Position = ???;
}
else
{
gl_Position = ftransform();
}
}
I've tried simply setting gl_Position to ftransform() - (vec4(realLightDir, 1.0) * someConstant), but this caused some kind of depth-testing bugs (some faces seemed to be visible behind others when I rendered the volume with colour), and someConstant didn't seem to affect how far the back faces were extended.
Update - Jan. 22
Just wanted to clear up questions about what space I'm probably in. I must say that keeping track of what space I'm in is the greatest source of my shader headaches.
When rendering the scene, I first set up the camera using gluLookAt. The camera may be fixed or it may move around; it should not matter. I then use translation functions like glTranslated to position my model(s).
In the program (i.e. on the CPU) I represent the light vector in world space (three floats). I've found during development that to get this light vector in the right space of my shader I had to multiply it by the inverse of the modelview matrix after setting the camera and before positioning the models. So, my program code is like this:
Position camera (gluLookAt)
Take light vector, which is in world space, and multiply it by the inverse of the current modelview matrix and pass it to the shader
Transformations to position models
Drawing of models
Does this make anything clearer?
The result of ftransform() is in clip space, so this is not the space in which you want to apply realLightDir. I'm not sure which space your light is in (your comment confuses me), but what is certain is that you want to add vectors that are in the same space.
On the CPU this is represented in world
space. After setting the camera with
gluLookAt, the light vector is multiplied by
the inverse of the modelview matrix to get
it into eye space (I think that's what I'm
working in :P ) before getting passed to
this shader.
Multiplying a vector by the inverse of the modelview matrix brings the vector from view space to model space. So you're saying that your light vector (in world space) gets a transform that goes view -> model. That makes little sense to me.
We have 4 spaces:
model space: the space your gl_Vertex is defined in.
world space: a space that GL does not care about in general, which represents an arbitrary space to locate the models in. It's usually what the 3D engine works in (it maps to our general understanding of world coordinates).
view space: a space that corresponds to the frame of reference of the viewer. (0,0,0) is where the viewer is, looking down Z. Obtained by multiplying gl_Vertex by the modelview matrix.
clip space: the magic space that the projection matrix brings us into. The result of ftransform() is in this space (as is gl_ModelViewProjectionMatrix * gl_Vertex).
Can you clarify exactly which space your light direction is in ?
What you need to do, however, is perform the light vector addition in model, world, or view space: bring all the parts of your operation into the same space. E.g. for model space, just compute the light direction in model space on the CPU, and do:
vec4 vertexTemp = gl_Vertex + vec4(lightDirInModelSpace, 0.0) * someConst;
then you can bring that new vertex position in clip space with
gl_Position = gl_ModelViewProjectionMatrix * vertexTemp;
Last bit, don't try to apply vector additions in clip-space. It won't generally do what you think it should do, as at that point you are necessarily dealing with homogeneous coordinates with non-uniform w.
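As a rough sketch of the CPU side of the model-space option (GLM, with modelMatrix as an illustrative name for whatever places this mesh in the world): bring the world-space light direction into model space with the inverse of the model matrix, using w = 0 so only rotation and scale apply:
#include <glm/glm.hpp>
// Sketch: transform a world-space, parallel-light direction into model space.
glm::vec3 lightDirInModelSpace(const glm::mat4 &modelMatrix,
                               const glm::vec3 &lightDirWorld)
{
    glm::vec4 d = glm::inverse(modelMatrix) * glm::vec4(lightDirWorld, 0.0f); // w = 0: a direction, not a point
    return glm::normalize(glm::vec3(d));
}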