I would like to draw a line between two vertices, but I don't know the two points in advance.
What I'm trying to do to achieve this is upload a vertex buffer with two vertices at (0, 0, 0). Then, before the draw call, i.e. before glDrawArrays(GL_LINES, 0, 2), translate the two vertices with separate model matrices, using a single shader program. To put it another way: I know fromVertex and toVertex, and I want to draw a line between them. I want to apply model matrix 1 to translate vertex 1 from (0, 0, 0) to fromVertex, and model matrix 2 to translate vertex 2 from (0, 0, 0) to toVertex.
uniform mat4 projection;
uniform mat4 camera;
uniform mat4 model;
in vec3 vert;
in vec2 vertTexCoord;
in vec3 vertColor;
out vec2 fragTexCoord;
out vec3 fragColor;
void main() {
    // Pass the tex coord straight through to the fragment shader
    fragTexCoord = vertTexCoord;
    fragColor = vertColor;
    // Apply all matrix transformations to vert
    gl_Position = projection * camera * model * vec4(vert, 1);
}
I can't translate this into code, because I can't think of a way to set the model matrix uniform in the vertex shader to two different values within a single line draw. I've considered using a scale and/or rotate operation to achieve this, but I'm guessing there must be an easier way.
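For illustration, here is a hypothetical variant of the shader above (not the asker's code) that applies two different model matrices in one draw call, using a uniform array indexed with the built-in gl_VertexID, available since GLSL 1.30:

#version 130
uniform mat4 projection;
uniform mat4 camera;
uniform mat4 model[2];  // model[0] moves vertex 0 to fromVertex, model[1] moves vertex 1 to toVertex
in vec3 vert;

void main() {
    gl_Position = projection * camera * model[gl_VertexID] * vec4(vert, 1.0);
}

Both matrices can then be uploaded in a single call, e.g. glUniformMatrix4fv(modelLoc, 2, GL_FALSE, values), where modelLoc is the location of model[0].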
I was trying to solve this without calling glBufferData in the render loop. Just uploading the two vertices that way solves the problem trivially.
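For comparison, the trivial buffer-upload route might look like this each frame, overwriting the existing buffer in place with glBufferSubData (a sketch; lineVbo, fromVertex and toVertex are hypothetical names):

GLfloat endpoints[6] = {
    fromVertex.x, fromVertex.y, fromVertex.z,  /* line start */
    toVertex.x,   toVertex.y,   toVertex.z,    /* line end */
};
glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(endpoints), endpoints);  /* overwrite in place */
glDrawArrays(GL_LINES, 0, 2);  /* two vertices, one line */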
I have been using OpenGL together with GLM, Glad, and GLFW to create a 2D game. I want to achieve a simple 2D rotation, presumably about the Z axis because nothing is 3D. The problem is, when I build a simple model matrix from a rotation matrix multiplied with a translation and dilation (scale) matrix, the rotation looks 3D when the primitive is rendered: the square is stretched and the sides are no longer the same length. Is there a way to avoid this stretch so that the square keeps its proportions while it rotates?
Vertex Shader:
//shader vertex
#version 430 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec2 aTexCoord;
//uniform mat4 transform;
uniform mat4 model;
out vec2 TexCoord;
void main()
{
    gl_Position = model * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}
I have a function that iterates through vectors of matrices to handle large batches of objects. The matrices are first initialized to glm::mat4(1.0f).
void Trans::moveBatch(std::vector<glm::vec2>& speed, std::vector<float>& rot)
{
    for (unsigned int i = 0; i < speed.size(); i++)
    {
        // Accumulate this object's rotation about the Z axis
        batchRotator[i] = glm::rotate(batchRotator[i], glm::radians(rot[i]), glm::vec3(0.0f, 0.0f, 1.0f));
        // Accumulate this object's translation
        batchMover[i] = glm::translate(batchMover[i], glm::vec3(speed[i].x, speed[i].y, 0.0f));
        // Keep the bounding box in sync (x/y and z/w presumably hold the two corners)
        batchBox[i].x += speed[i].x;
        batchBox[i].y += speed[i].y;
        batchBox[i].z += speed[i].x;
        batchBox[i].w += speed[i].y;
    }
}
I then multiply my matrices and send that as the model matrix into the shader.
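That multiply-and-upload step might look roughly like this (a sketch; modelLoc is a hypothetical uniform location for the shader's model uniform):

#include <glm/gtc/type_ptr.hpp>  // glm::value_ptr

glm::mat4 model = batchMover[i] * batchRotator[i];  // rotate about the origin, then translate
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));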
Thank you very much! I was able to use glm::ortho to create a projection matrix, and that solved my problem once I put the world coordinates in a 1920x1080 range, matching the correct aspect ratio.
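For reference, the ortho projection described in that fix might look like this (the exact bounds are an assumption):

#include <glm/gtc/matrix_transform.hpp>  // glm::ortho

// World coordinates span 1920x1080, matching the screen's aspect ratio
glm::mat4 projection = glm::ortho(0.0f, 1920.0f, 0.0f, 1080.0f, -1.0f, 1.0f);
// In the shader: gl_Position = projection * model * vec4(aPos, 1.0);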
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll notice that when a face points at the camera it is completely lit, and as you rotate the model the lighting changes along with the orientation of each face. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray).
Model slightly rotated (notice how the faces are a bit darker).
More rotation (notice how all faces are now darker).
Off the top of my head, I think the way it works is that it somehow assigns colors per face in the shader. If the angle between the face normal and the camera is zero, the face is fully lit (according to the color of the face); otherwise it is lit proportionally to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on data interpolated from the vertices. A quick search revealed questions like this, but I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after applying the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);

    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);

    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);

    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);

    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube.
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate the vertex position data for faces which don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach because it is simple: calculate the normals ahead of time in your program, and then use them to compute the face colors in your vertex shader.
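A minimal sketch of that precomputation (names and types here are hypothetical; it assumes GLM and a triangle index list): compute one normal per triangle and emit three duplicated vertices carrying it:

#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 position; glm::vec3 normal; };

std::vector<Vertex> buildFlatShadedVertices(const std::vector<glm::vec3>& positions,
                                            const std::vector<unsigned>& indices)
{
    std::vector<Vertex> out;
    out.reserve(indices.size());
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const glm::vec3& a = positions[indices[i]];
        const glm::vec3& b = positions[indices[i + 1]];
        const glm::vec3& c = positions[indices[i + 2]];
        glm::vec3 n = glm::normalize(glm::cross(b - a, c - a));  // face normal
        out.push_back({a, n});  // each corner gets the same per-face normal
        out.push_back({b, n});
        out.push_back({c, n});
    }
    return out;  // draw with glDrawArrays(GL_TRIANGLES, 0, out.size())
}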
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
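Put together, a minimal flat-shading fragment shader along these lines might look like this (a sketch; FragPos is assumed to be the world-space position passed from the vertex shader, and lightPos1, diffColor1 and objectColor are assumed uniforms):

#version 330 core
in vec3 FragPos;       // world-space position from the vertex shader
out vec4 outColor;

uniform vec3 lightPos1;
uniform vec3 diffColor1;
uniform vec3 objectColor;

void main()
{
    // Per-face normal from screen-space derivatives (flat shading)
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));

    // Diffuse term from light 1
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);

    outColor = vec4(diff1 * diffColor1 * objectColor, 1.0);
}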
I tried to implement a motion-blur post-processing effect as described in GPU Gems 3, Chapter 27, but I am encountering issues: the blur jitters when I move the camera and does not work as expected.
This is my fragment shader:
varying vec3 pos;
varying vec3 normal;
varying vec2 tex;
varying vec3 tangent;
uniform mat4 matrix;
uniform mat4 VPmatrix;
uniform mat4 matrixPrev;
uniform mat4 VPmatrixPrev;
uniform sampler2D diffuseTexture;
uniform sampler2D zTexture;
void main() {
    vec2 texCoord = tex;
    float zOverW = texture2D(zTexture, texCoord).r;
    vec4 H = vec4(texCoord.x * 2.0 - 1.0, (1 - texCoord.y) * 2.0 - 1.0, zOverW, 1.0);
    mat4 inv1 = inverse(matrix);
    mat4 inv2 = inverse(VPmatrix);
    vec4 D = H * (inv2 * inv1);
    vec4 worldPos = D / D.w;
    mat4 prev = matrixPrev * VPmatrixPrev;
    vec4 previousPos = worldPos * prev;
    previousPos /= previousPos.w;
    vec2 velocity = vec2((H.x - previousPos.x) / 2.0, (H.y - previousPos.y) / 2.0);
    vec3 color = vec3(texture2D(diffuseTexture, texCoord));
    for (int i = 0; i < 16; i++) {
        texCoord += velocity;
        vec3 color2 = vec3(texture2D(diffuseTexture, texCoord));
        color += color2;
    }
    color /= 16;
    gl_FragColor = vec4(color, 1.0);
}
The uniforms matrix and VPmatrix are the ModelView and Projection matrices, retrieved as follows:
float matrix[16];
float VPmatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glGetFloatv(GL_PROJECTION_MATRIX, VPmatrix);
The uniforms matrixPrev and VPmatrixPrev are the previous ModelView and Projection matrices, saved as follows after rendering (in the code below matrixPrev and VPmatrixPrev are global variables):
for (int i = 0; i < 16; i++) {
    matrixPrev[i] = matrix[i];
    VPmatrixPrev[i] = VPmatrix[i];
}
All four matrices are passed to the shader as follows:
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrix"), 16, GL_FALSE, VPmatrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrixPrev"), 16, GL_FALSE, matrixPrev);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrixPrev"), 16, GL_FALSE, VPmatrixPrev);
In the shader, the uniform zTexture is a texture containing the depth values of the frame buffer. (Not sure if they are divided by W)
I hoped the shader would work, but what I get instead is that when I rotate the camera, the blur jitters really fast even for subtle rotations. I tried rendering the zTexture and the result is a grayscale image, so it seems alright. I also tried setting the fragment color to H.xyz and to previousPos.xyz: rendering H.xyz produces a colored screen, and previousPos.xyz produces the same colored screen, except that when the camera rotates the colors seem to invert. So I suspect something is wrong with extracting the world position from depth.
Am I missing something here? Any help would be greatly appreciated.
Forget my previous answer; it's a matrix error:
The matrix multiplications are in the wrong order, or else you must send the transposed matrices (which explains why only the rotation was taken into account while the translation/scale components were messed up):
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
becomes
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 1, GL_TRUE, matrix);
(Note that 16 was not the matrix size but the matrix count, so it should be just one here.)
Another note: you should compute the inverse matrices and the projection*view product in your main application, not in the pixel shader: there they are computed once per pixel, but they only need to be computed once per frame!
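With a matrix library such as GLM (not used in the question's fixed-function code), that per-frame precomputation could look like this sketch:

#include <glm/glm.hpp>

// Build the combined matrix and its inverse once per frame on the CPU,
// then upload them as uniforms; the shader never calls inverse().
glm::mat4 computeInvViewProj(const glm::mat4& projection, const glm::mat4& view)
{
    glm::mat4 viewProj = projection * view;
    return glm::inverse(viewProj);
}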
One more note: Google "RuinIsland_GLSL_Demo.zip": it contains many good GLSL samples and helped me solve this issue.
I'm sorry I don't have an answer but just a clue:
I have EXACTLY the same problem as yours.
I guess I used the same material Google points to and, like you, the blurring appears only when the camera rotates.
However, I have a clue (the same as yours, in fact): I think it is because the GLSL shaders around the net assume our depth texture contains z/w, whereas, like me, you used a genuine depth texture filled by the fixed pipeline.
So you only have z, and you are missing w in the very first step.
Since texture2D(zTexture, texCoord).r contains only z, we are missing the computation that yields zOverW.
In the end we are stuck halfway between window space and clip space.
I found this: https://www.opengl.org/wiki/Compute_eye_space_from_window_space#From_NDC_to_clip
but my perspective projection matrix does not meet the requirements; perhaps it will help you.
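If it helps, here is a sketch of unprojecting a plain depth-buffer value (rather than z/w); invViewProj is assumed to be inverse(projection * view), computed on the CPU and uploaded as a uniform:

varying vec2 tex;
uniform sampler2D zTexture;
uniform mat4 invViewProj;  // inverse(projection * view), built on the CPU

void main() {
    float depth = texture2D(zTexture, tex).r;           // plain depth in [0, 1]
    vec4 ndc = vec4(tex * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = invViewProj * ndc;                     // column-vector convention
    world /= world.w;                                   // undo the perspective divide
    gl_FragColor = vec4(world.xyz, 1.0);                // world position, for debugging
}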
Are point sprites the best choice to build a particle system?
Are point sprites present in the newer versions of OpenGL and in the drivers of the latest graphics cards? Or should I do it using VBOs and GLSL?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL; they are a completely orthogonal feature. Whether or not you use point sprites, you always have to use VBOs for uploading the geometry, be it just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL, of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. Likewise, you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which says where inside the [0,1]-square of the whole point the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
uniform mat4 mvp;
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = mvp * position;
}
fragment shader:
uniform sampler2D tex;
layout(location = 0) out vec4 color;
void main()
{
    color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;   // viewport size in pixels
uniform float spriteSize;  // desired world-space sprite size
layout(location = 0) in vec4 position;

void main()
{
    vec4 eyePos = modelview * position;
    // Project the sprite's world-space extents to find its size on screen
    vec4 projVoxel = projection * vec4(spriteSize, spriteSize, eyePos.z, eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    // Derive the pixel size from the projected x/y extents
    gl_PointSize = 0.25 * (projSize.x + projSize.y);
    gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization, and thus before the point gets "enlarged"). So if the center of the point is outside the screen, the rest of it that might still reach into the viewing area is not shown; in the worst case, a point that is half-way out of the screen will suddenly disappear. This is only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems as the canonical use case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
uniform mat4 modelview;
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
const vec2 corners[4] = {
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0) };

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;
uniform float spriteSize;

out vec2 texCoord;

void main()
{
    for (int i = 0; i < 4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;                  //start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5));  //add corner position
        gl_Position = projection * eyePos;                   //complete transformation
        texCoord = corners[i];                               //use corner as texCoord
        EmitVertex();
    }
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
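For completeness, a matching fragment shader would then be the same minimal one as before, except that it samples with the texCoord varying:

uniform sampler2D tex;
in vec2 texCoord;                      // from the geometry shader
layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, texCoord);    // texCoord replaces gl_PointCoord
}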
Another possibility (and maybe a faster one, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;
out vec2 texCoord;
void main()
{
    vec4 eyePos = modelview * position;              //transform to eye-space
    eyePos.xy += spriteSize * (corner - vec2(0.5));  //add corner position
    gl_Position = projection * eyePos;               //complete transformation
    texCoord = corner;
}
This achieves the same as the geometry-shader-based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of real point sprites, you need to put some more computational effort into it. But this is left as an exercise, and would be quite the opposite of the world-to-screen transformation in the point-size vertex shader shown earlier.
I want to adjust the colors depending on their xyz position in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main() {
    vec4 c;
    c.x = verpos.x;
    c.y = verpos.y;
    c.z = verpos.z;
    c.w = 1.0;
    gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Here's my vertex shader:
varying vec4 verpos;
void main() {
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}
Edit 2: Changed the title; I want world coords, not screen coords!
In the vertex shader you have gl_Vertex (or something else if you don't use the fixed pipeline), which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, then read it in the fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
at the beginning of your vertex shader. Then, instead of ftransform(), write:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must write values to both of these new matrices.

First, to get the view matrix: do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt(), or whatever you prefer, as you normally would, but then call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to read the matrix data back into an array.

Second, in a similar way, to get the model matrix: call glLoadIdentity(), do the object transformations with glTranslate(), glRotate(), glScale() etc., and finally call glGetFloatv(GL_MODELVIEW_MATRIX, &array); to get the matrix data out of OpenGL so you can send it to your vertex shader.

Note especially that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object, which results in a single matrix that performs both the view and model functions. But because you're using separate matrices, you need to reset the matrix after the camera transformations with glLoadIdentity().
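A short sketch of that read-back pattern (the specific camera and object transforms here are only placeholders):

float viewMatrix[16], modelMatrix[16];

glMatrixMode(GL_MODELVIEW);

/* Camera transformations only -> view matrix */
glLoadIdentity();
gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
glGetFloatv(GL_MODELVIEW_MATRIX, viewMatrix);   /* upload as view_matrix */

/* Reset, then object transformations only -> model matrix */
glLoadIdentity();
glTranslatef(1.0f, 0.0f, 0.0f);
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, modelMatrix);  /* upload as model_matrix */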
gl_FragCoord holds window (pixel) coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially un-does the perspective projection and gives you an (eye-space) depth value, i.e.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates. But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;

uniform mat4 World;
uniform mat4 WVP;

//World-space position for the fragment shader
out vec4 FragPos;

void main()
{
    FragPos = World * vec4(Position, 1.0);
    gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;

void main()
{
    Color = FragPos;
}
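Host-side, the two uniforms might be fed like this (a sketch; program and the three matrices are assumed to exist, and an OpenGL loader header such as glad is required for the GL calls):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMatrices(GLuint program, const glm::mat4& projection,
                    const glm::mat4& view, const glm::mat4& model)
{
    glm::mat4 wvp = projection * view * model;  // World-View-Projection
    glUniformMatrix4fv(glGetUniformLocation(program, "World"),
                       1, GL_FALSE, glm::value_ptr(model));
    glUniformMatrix4fv(glGetUniformLocation(program, "WVP"),
                       1, GL_FALSE, glm::value_ptr(wvp));
}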
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord coordinates. Consult the OpenGL specs if you really have to do this, because a deep understanding of the transformations is necessary.