GLSL vertex shader on 3d model - opengl

I am currently coding a simple vertex shader for a model. What I want to achieve is something like this:
I have a model of a dragon, nothing too fancy, and I want the shader to move the wing vertices around a bit, to simulate flying. Now, this is for academic purposes, so it doesn't have to be perfect in any way.
What I'm looking for, precisely, is: how do I make, for example, only the vertices farther from the center of the model move? Is there any way I can compare the position of a vertex to the center of the model, and make it move more or less (using a time variable sent from the OpenGL app) depending on its distance to the center?
If not, are there any other ways that would be appropriate and relatively simple to do?

You could try this:
#version 330 core
in vec3 vertex;
void main() {
//get the distance from 0,0,0
float distanceFromCenter = length(vertex);
//create a simple squiggly wave function
//you have to change the constant at the end depending
//on the size of your model
float distortionAmount = sin(distanceFromCenter / 10.0);
//the last vector says which axis to distort along, and by how much; this example wiggles on the z-axis
vec3 distortedPosition = vertex + distortionAmount * vec3(0.0, 0.0, 1.0);
//note: you would normally also multiply by your model-view-projection matrix here
gl_Position = vec4(distortedPosition, 1.0);
}
It might not be perfect, but it should get you started.
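Since the question also mentions driving the motion with a time value sent from the application, here is a minimal sketch of that variant (the mvp and time uniform names, and the constants, are assumptions you would adapt to your setup):
#version 330 core
in vec3 vertex;
uniform mat4 mvp;    // assumed model-view-projection matrix uploaded by the app
uniform float time;  // assumed elapsed time in seconds, uploaded every frame
void main() {
    float distanceFromCenter = length(vertex);
    // the wave now travels over time; tune both constants to the size of your model
    float distortionAmount = sin(distanceFromCenter / 10.0 + time * 2.0);
    vec3 distortedPosition = vertex + distortionAmount * vec3(0.0, 0.0, 1.0);
    gl_Position = mvp * vec4(distortedPosition, 1.0);
}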

Related

Clear a scale component from view matrix. Camera-independent scaling

In my little program I need to scale a gizmo by camera distance so that objects can be moved conveniently at any time. I think I have two options:
Calculate the distance from the gizmo to the camera, build a scaling matrix, and multiply all the points in a loop:
glm::mat4 scaleMat;
scaleMat= glm::scale(scaleMat, glm::vec3(glm::distance(cameraPos,gizmoPos)));
for (int i = 0; i < vertices.size(); i++)
{
vertices[i] = glm::vec3(scaleMat * glm::vec4(vertices[i], 1.0));
}
Clear the scale component of the view (lookAt) matrix only for gizmo.
If I use the first way, the gizmo accumulates scale and grows every time I change the camera position. I think the second way is more accurate, but how do I do that? Thank you!
If you want to apply a different scaling to the same model each time, you should not manipulate the vertices themselves (in fact, you should never do that), but the model matrix instead. Through that you can transform the object without processing the vertices in code.
I would go with something like this:
glm::mat4 modelMatrix(1.0f);
modelMatrix = glm::scale(modelMatrix,glm::vec3(glm::distance(cameraPos,gizmoPos)));
This will give you the scaled model matrix. Now you only need to pass it to your vertex shader.
You should have, roughly, something like this:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 M;
void main(){
gl_Position = M * vec4(vertexPosition_modelspace,1);
}
I have not tested this code, but it is very similar to code from one of my projects. There I keep my model matrix around, so the scaling accumulates; if you pass the vertex shader a brand-new matrix each time, nothing will be carried over.
If you need more about passing the uniform to the shader, have a look here at my current project's code.
You can find the scaling inside TARDIS::applyScaling and the shader loading in main.cpp
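For completeness, a minimal sketch of uploading that matrix as the M uniform each frame (the programID handle is an assumption; glm::value_ptr comes from <glm/gtc/type_ptr.hpp>):
// once, after linking the shader program:
GLint mLocation = glGetUniformLocation(programID, "M");

// every frame, before drawing the gizmo:
glm::mat4 modelMatrix(1.0f);
modelMatrix = glm::scale(modelMatrix, glm::vec3(glm::distance(cameraPos, gizmoPos)));
glUseProgram(programID);
glUniformMatrix4fv(mLocation, 1, GL_FALSE, glm::value_ptr(modelMatrix));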

Skew an image using openGL shaders

To lighten the work-load on my artist, I'm working on making a bare-bones skeletal animation system for images. I've conceptualized how all the parts work. But to make it animate how I want to, I need to be able to skew an image in real time. (If I can't do that, I still know how to make it work, but it'd be a lot prettier if I could skew images to add perspective)
I'm using SFML, which doesn't support image skewing. I've been told I'd need to use OpenGL shaders on the image, but all the advice I got on that was "learn GLSL". So I figured maybe someone over here could help me out a bit.
Does anyone know where I should start with this? Or how to do it?
Basically, I want to be able to deform an image to give it perspective (as shown in the following mockup).
The images on the left are being skewed at the top and bottom. The images on the right are being skewed at the left and right.
An example of how to skew a texture in GLSL would be the following (poorly optimized) shader. Ideally, you would want to precompute your transform matrix in your regular program and pass it in as a uniform, so that you aren't recomputing the transform for every vertex. If you still want to compute the transform in the shader, pass the skew factors in as uniforms instead. Otherwise, you'll have to open the shader and edit it every time you want to change the skew factor.
This is for a screen aligned quad as well.
Vert
attribute vec3 aVertexPosition;
varying vec2 texCoord;
void main(void){
// Set regular texture coords
texCoord = ((vec2(aVertexPosition.xy) + 1.0) / 2.0);
// How much we want to skew each axis by
float xSkew = 0.0;
float ySkew = 0.0;
// Create a transform that will skew our texture coords
mat3 trans = mat3(
1.0 , tan(xSkew), 0.0,
tan(ySkew), 1.0, 0.0,
0.0 , 0.0, 1.0
);
// Apply the transform to our tex coords
texCoord = (trans * (vec3(texCoord.xy, 0.0))).xy;
// Set vertex position
gl_Position = (vec4(aVertexPosition, 1.0));
}
Frag
precision highp float;
uniform sampler2D sceneTexture;
varying vec2 texCoord;
void main(void){
gl_FragColor = texture2D(sceneTexture, texCoord);
}
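Following the advice above, a sketch of the same vertex shader with the skew factors supplied by the application (the uXSkew/uYSkew names are made up; you would set them with glUniform1f, or sf::Shader::setParameter if you stay inside SFML):
attribute vec3 aVertexPosition;
uniform float uXSkew;   // assumed to be set by the application each frame
uniform float uYSkew;
varying vec2 texCoord;
void main(void){
    texCoord = (aVertexPosition.xy + 1.0) / 2.0;
    // same shear matrix as above, but driven by the uniforms
    mat3 trans = mat3(
        1.0,         tan(uXSkew), 0.0,
        tan(uYSkew), 1.0,         0.0,
        0.0,         0.0,         1.0
    );
    texCoord = (trans * vec3(texCoord, 0.0)).xy;
    gl_Position = vec4(aVertexPosition, 1.0);
}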
This ended up being significantly simpler than I thought it would be. SFML has a sf::VertexArray class that allows drawing custom quads without resorting to raw OpenGL.
The code I ended up going with is as follows (for anyone else who runs into this problem):
sf::Texture texture;
texture.loadFromFile("texture.png");
sf::Vector2u size = texture.getSize();
sf::VertexArray box(sf::Quads, 4);
box[0].position = sf::Vector2f(0, 0); // top left corner position
box[1].position = sf::Vector2f(0, 100); // bottom left corner position
box[2].position = sf::Vector2f(100, 100); // bottom right corner position
box[3].position = sf::Vector2f(100, 0); // top right corner position
box[0].texCoords = sf::Vector2f(0,0);
box[1].texCoords = sf::Vector2f(0,size.y-1);
box[2].texCoords = sf::Vector2f(size.x-1,size.y-1);
box[3].texCoords = sf::Vector2f(size.x-1,0);
To draw it, you call the following code wherever you usually tell your window to draw stuff.
window.draw(box, &texture);
If you want to skew the image, you just change the positions of the corners. Works great. With this information, you should be able to create a custom drawable class. You'll have to write a bit of code (set_position, rotate, skew, etc), but you just need to change the position of the corners and draw.
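For example, to fake a perspective tilt you could pull the top corners sideways; the 20-pixel offset here is just an arbitrary illustration:
float skew = 20.f; // arbitrary horizontal offset for the top edge
box[0].position = sf::Vector2f(0 + skew, 0);     // top left, pushed right
box[3].position = sf::Vector2f(100 - skew, 0);   // top right, pushed left
window.draw(box, &texture);                      // redraw with the new corner positions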

OpenGL - Shadow cubemap sides have incorrect rotation?

This is more of a technical question than an actual programming question.
I'm trying to implement shadow mapping in my application, which was fairly straightforward for simple spotlights. However, for point lights I'm using shadow cubemaps, which I'm having a lot of trouble with.
After rendering my scene on the cubemap, this is my result:
(I've used glReadPixels to read the pixels of each side.)
Now, the object that should be casting the shadow is drawn as it should be; what confuses me is the orientation of the sides of the cubemap. It seems to me that the left side (X-) should be connected to the bottom side (Y-), i.e. rotated by 90° clockwise:
I can't find any examples of what a shadow cubemap is supposed to look like, so I'm unsure whether there's actually something wrong with mine or whether it's supposed to look like that. I'm fairly certain my matrices are set up correctly, and the shaders for rendering to the shadow map are as simple as can be, so I doubt there's anything wrong with them:
// Projection Matrix:
glm::perspective<float>(90.f,1.f,2.f,m_distance) // fov = 90, aspect = 1, near plane = 2, far plane = m_distance (the light's range)
// View Matrices:
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(1,0,0),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(-1,0,0),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,1,0),glm::vec3(0,0,-1));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,-1,0),glm::vec3(0,0,1));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,0,1),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,0,-1),glm::vec3(0,1,0));
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 depthMVP;
void main()
{
gl_Position = depthMVP *vec4(vertexPosition_modelspace,1.0);
}
Fragment Shader:
#version 330 core
layout(location = 0) out float fragmentdepth;
void main()
{
fragmentdepth = gl_FragCoord.z;
}
(I actually found these on another thread from here iirc)
Using this cubemap in the actual scene gives me odd results, but I don't know if my main fragment / vertex shaders are at fault here, or if my cubemap is incorrect in the first place, which makes debugging very difficult.
I'd basically just like confirmation or disconfirmation of whether my shadow cubemap 'looks' right and, if it doesn't, what could be causing this behavior.
// Update:
Here's a video of how the shadowmap is updated: http://youtu.be/t9VRZy9uGvs
It looks right to me, could anyone confirm / disconfirm?
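For comparison, a commonly used set of face view matrices when rendering into a GL cube map is sketched below; note that the up vectors differ from the ones above for the X and Z faces, which could produce exactly this kind of apparent rotation (treat this as a reference point, not a confirmed fix):
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 1, 0, 0), glm::vec3(0,-1, 0)); // +X
glm::lookAt(GetPosition(), GetPosition() + glm::vec3(-1, 0, 0), glm::vec3(0,-1, 0)); // -X
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 1, 0), glm::vec3(0, 0, 1)); // +Y
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0,-1, 0), glm::vec3(0, 0,-1)); // -Y
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0, 1), glm::vec3(0,-1, 0)); // +Z
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0,-1), glm::vec3(0,-1, 0)); // -Z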

openGL: Losing details from a loaded obj file

I'm loading a .obj file into my program (without a .mtl file).
In the vertex shader, I have this:
#version 330
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec3 in_color;
and my vertex structure looks like this:
struct VertexFormat {
glm::vec3 position;
glm::vec3 color;
glm::vec3 normal;
glm::vec2 texcoord;
VertexFormat() : position(0.0f), color(0.0f), normal(0.0f), texcoord(0.0f) {}
VertexFormat(glm::vec3 _position, glm::vec3 _normal, glm::vec2 _texcoord, glm::vec3 _color) {
position = _position;
normal = _normal;
texcoord = _texcoord;
// color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));
color = normal;
}
};
Because I don't have a .mtl file, the color attribute depends on the other vertex attributes.
If I let color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));, the object loses some of its details (e.g., a human face looks like just an ellipsoid).
This does not happen when I let color = normal;.
I want the color to not depend only on the normal attribute, because then every object is colored like a rainbow.
Any idea why this happens and how I can make it work?
EDIT:
This is an object with color = normal:
And this is with color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));:
The only things changed between the two pictures are the fact that I commented color = normal; and decommented the other.
In your comment you wrote
I would prefer to not use lighting at all. I don't understand why without lighting the first works (shows the details), while the other one doesn't
Perceived detail depends on the color contrast in the final picture. The stronger the contrast, the stronger the detail (there's a strong relation to so-called spatial frequencies as well).
Anyway, creases, edges, bulges, etc. in the mesh create a strong, locally position-dependent variation of the surface normal, which is what you see. In mathematical terms you could write this as
|| ∂/∂r n(r) ||
where n denotes the normal and r denotes the position, which becomes very large for creases and such.
The variation of a color c(r) that depends only on position, however, would be
|| ∂/∂r c(r) ||
But since c(r) depends only on r and not on any local surface feature, it acts much like a constant, and the local spatial variation in color is nearly uniform as well, i.e. it has no strong features.
Essentially it means that you can make details visible only based on derivatives of surface features such as the normals.
The easiest way to do this is to use illumination. But you can use other methods as well: for example, you can calculate the local variation of the normals (giving you the curvature of the surface) and make more strongly curved areas brighter, or you can perform post-processing on the screen-space geometry, applying something like a first- or second-order gradient filter.
But you will not get around applying some math to it. There's no such thing as a free lunch. Also, don't expect people to write code for you without being clear about what you actually want.
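As a minimal sketch of the illumination route (the variable names and the fixed light direction are illustrative assumptions; the normal is assumed to be forwarded from the vertex shader):
#version 330
in vec3 frag_normal;   // normal forwarded from the vertex shader (assumed)
out vec4 out_color;
void main() {
    vec3 L = normalize(vec3(0.5, 1.0, 0.3));                   // fixed directional light
    float diffuse = max(dot(normalize(frag_normal), L), 0.0);  // N·L varies with surface orientation
    out_color = vec4(vec3(0.1 + 0.9 * diffuse), 1.0);          // small ambient floor plus diffuse term
}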

Using shaders to implement field of view on a 2D environment

I'm implementing dynamic field of view. I decided to use shaders in order to make the illumination better looking and how it affects the walls. Here is the scenario I'm working on:
http://i.imgur.com/QxZVyo7.jpg
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that make up the walls.
Using the vertices of the polygons I cast shadows to define the viewable area. (The purple lines are part of the mask I use in the next step.)
Using the shader when drawing the shadows on top of the scenario, I keep the walls from being obscured as well.
This way the shadows are cast dynamically along the walls as the field of view changes.
I have used the following shader to achieve this, but I feel this is kind of overkill and really inefficient:
uniform sampler2D texture;
uniform sampler2D filterTexture;
uniform vec2 textureSize;
uniform float cellSize;
uniform sampler2D shadowTexture;
void main()
{
vec2 position;
vec4 filterPixel;
vec4 shadowPixel;
vec4 pixel = texture2D(texture, gl_TexCoord[0].xy );
for( float i=0 ; i<=cellSize*2 ; i++)
{
position = gl_TexCoord[0].xy;
position.y = position.y - (i/textureSize.y);
filterPixel = texture2D( filterTexture, position );
position.y = position.y - (1/textureSize.y);
shadowPixel = texture2D( texture, position );
if (shadowPixel == vec4(0.0)){
if( filterPixel.r == 1.0 )
{
if( filterPixel.b == 1.0 ){
pixel.a = 0;
break;
}
else if( i<=cellSize )
{
pixel.a = 0;
break;
}
}
}
}
gl_FragColor = pixel;
}
Iterating like this for each fragment just to look for the red-colored pixel in the mask seems like a huge overhead, but I fail to see how to accomplish this task any other way using shaders.
The solution here is really quite simple: use shadow maps.
Your situation may be 2D instead of 3D, but the basic concept is the same. You want to "shadow" areas based on whether there is an obstructive surface between some point in the world and a "light source" (in your case, the player character).
In 3D, shadow maps work by rendering the world from the perspective of the light source. This results in a 2D texture where the values represent the depth from the light (in a particular direction) to the nearest obstruction. When you render the scene for real, you check the current fragment's location by projecting it into the 2D depth texture (the shadow map). If the depth value you compute for the current fragment is closer than the nearest obstruction in the projected location in the shadow map, then the fragment is visible from the light. If not, then it isn't.
Your 2D version would have to do the same thing, only with one less dimension. You render your 2D world from the perspective of the "light source". Your 2D world in this case is really just the obstructing quads (you'll have to render them with line polygon filling). Any quads that obstruct sight should be rendered into the shadow map. Texture accesses are completely unnecessary; the only information you need is depth. Your shader doesn't even have to write a color. You render these objects by projecting the 2D space into a 1D texture.
This would look something like this:
X..X
XXXXXXXX..XXXXXXXXXXXXXXXXXXXX
X.............\.../..........X
X..............\./...........X
X...............C............X
X............../.\...........X
X............./...\..........X
X............/.....\.........X
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C is the character's position; the dots are just regular, unobstructive floor. The Xs are the walls. The lines from C represent the four directions you need to render from.
In 3D, to do shadow mapping for point lights, you have to render the scene 6 times, in 6 different directions into the faces of a cube shadow map. In 2D, you have to render the scene 4 times, in 4 different directions into 4 different 1D shadow maps. You can use a 1D array texture for this.
Once you have your shadow maps, you just use them in your shader to detect when a fragment is visible. To do that, you'll need a set of transforms from window space into the 4 different projections that represent the 4 directions of view that you rendered into. Only one of these will be used for any particular fragment, based on where the fragment is relative to the target.
To implement this, I'd start with just getting a simple case of directional "shadowing" to work. That is, don't use a position; just a direction for a "light". That will test your ability to develop a 2D-to-1D projection matrix, as well as an appropriate camera-space matrix to transform your world-space quads into camera space. Once you have mastered that, then you can get to work doing it 4 times with different projections.
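A very rough sketch of what the per-fragment test could then look like for one of the four directions (all names, and the exact form of the 2D-to-1D projection, are assumptions; real code first has to pick the correct direction and shadow map for the fragment):
uniform sampler1D shadowMap1D;  // depth rendered from the light for this direction
uniform mat3 lightProjection;   // 2D world space -> 1D shadow-map space for this direction
varying vec2 worldPos;          // fragment position in 2D world space

float shadowFactor(vec2 p)
{
    vec3 proj = lightProjection * vec3(p, 1.0);       // homogeneous 2D projection
    float coord = proj.x / proj.z * 0.5 + 0.5;        // 1D shadow-map coordinate
    float depth = proj.y / proj.z * 0.5 + 0.5;        // this fragment's depth from the light
    float nearest = texture1D(shadowMap1D, coord).r;  // nearest obstruction in that direction
    return depth <= nearest ? 1.0 : 0.0;              // 1 = visible from the light, 0 = shadowed
}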