My buddy and I are trying to create a 2D engine, but we can't get transformations to work; this is how it looks right now. We have created matrices for each operation, like scaling, translation, etc., but we can't get the quad to move or do anything. Have we made any math errors? This is what we get now when we just render the VBO without using the shaders. By shaders I mean GLSL.
http://imgur.com/T0jDSTO
Transform
http://pastebin.com/dKRW244e
Matrix3f
http://pastebin.com/GY0872k6
Matrix4f
http://pastebin.com/f1YNuM09
VBO
http://pastebin.com/5zVgWYtK
BasicShader
http://pastebin.com/RyeSxibQ
Shader
http://pastebin.com/68tJTswq
VertexShader
http://pastebin.com/ffDvsL2Y
FragmentShader
http://pastebin.com/SWT5EKAi
From where we render and update
http://pastebin.com/dTG6HHDX
I think I have messed up binding the shaders to the VBO, is that right? And if so, how do I fix it?
Also, I only want answers in modern OpenGL, so no glBegin() and glEnd(). Thank you!
In your vertex shader, you wrote gl_Position = in_Position * transform;
In GLSL this compiles, but it treats in_Position as a row vector, so it effectively multiplies by the transpose of transform and gives the wrong result. Given that your rectangle is plain white even though you have made provisions for colour, I would also hazard that your shader failed to compile or link at all.
gl_Position = transform * in_Position; is the correct order for transforming a vector with a matrix.
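For reference, a minimal sketch of a vertex shader written with the column-vector convention; the attribute and uniform names below are placeholders, not the identifiers from the pastebins:
#version 330 core
layout(location = 0) in vec4 in_Position;
layout(location = 1) in vec4 in_Color;
uniform mat4 transform;   // uploaded from the application with glUniformMatrix4fv
out vec4 pass_Color;
void main()
{
    // Matrix on the left, column vector on the right.
    gl_Position = transform * in_Position;
    pass_Color = in_Color;
}
It is also worth checking the compile and link logs with glGetShaderInfoLog and glGetProgramInfoLog; a plain white quad despite a colour attribute is often a sign the program never compiled or linked.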
Related
I am implementing a Phong shader in GLSL for an assignment, but I don't get the specular reflection I should. The way I understood it, it's the same as Gouraud shading, but instead of doing all the calculations in the vertex shader, you do them in the fragment shader, so that you interpolate the normals and then apply the Phong model at each pixel. As part of the assignment I also had to develop the Gouraud shader, and that works as expected, so I thought I just needed to move the vertex shader code into the fragment shader, but that doesn't seem to be the case.
What I do in the vertex shader is simply transform the vertex position into view coordinates and apply the transpose inverse of the model-view matrix to the vertex normal. Then, in the fragment shader, I apply just the view transform to the light position and use these coordinates to calculate the vectors needed in the Phong model. The lighting is almost correct, but some specular light is missing. All the parameters have been tested, so I would assume it's the light's position that is wrong; however, I have no idea why my code wouldn't work here when it does in the other shader. Also, I know this isn't the most efficient way of doing it (pre-computing the normal matrix would be faster); I'm just trying to get it working correctly first.
The problem is how the light source position is calculated. The following code
vec3 l = normalize(mat3(VMatrix)*light);
treats light as a direction (by normalizing it, and because the translation part of the view matrix is ignored), but it is actually a position. The correct code should be something like
vec3 l = (VMatrix * vec4(light, 1.0)).xyz;
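Building on that, here is a sketch of how the per-fragment light direction could then be formed; positionView and normalView are assumed names for the interpolated view-space position and normal, not taken from the original code:
#version 330
uniform mat4 VMatrix;   // view matrix, as in the snippet above
uniform vec3 light;     // light position (assumed to be given in world space)
in vec3 positionView;   // interpolated vertex position in view space (assumed name)
in vec3 normalView;     // interpolated normal in view space (assumed name)
out vec4 fragColor;
void main()
{
    // Transform the light *position* with the full 4x4 matrix so the translation applies,
    // then build a direction from the surface point towards the light.
    vec3 lightPosView = (VMatrix * vec4(light, 1.0)).xyz;
    vec3 l = normalize(lightPosView - positionView);
    vec3 n = normalize(normalView);
    float diff = max(dot(n, l), 0.0);
    fragColor = vec4(vec3(diff), 1.0);  // diffuse term only, for illustration
}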
I don't usually use a flat surface in OpenGL, but recently I've been taking up making After Effects plugins, and it has a template called Glator which passes a VBO containing the UVs. However, I have learned from reading ShaderToy fragment shaders that by passing the resolution of the billboard to the fragment shader as a uniform and doing this:
vec2 p = gl_FragCoord.st / resolution.xy;
You can generate a value which is the UV coordinate of the fragment on the flat surface. Am I right?
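If it helps, a minimal sketch of that idea, assuming a uniform called resolution is set from the host code (the names are assumptions):
#version 330
uniform vec2 resolution;   // width and height of the render target in pixels
out vec4 fragColor;
void main()
{
    // gl_FragCoord.st is in pixel units, so dividing by the resolution
    // yields coordinates in [0,1], usable as UVs.
    vec2 p = gl_FragCoord.st / resolution.xy;
    fragColor = vec4(p, 0.0, 1.0);  // visualize the derived UVs
}
Note that this gives window-relative coordinates; they only match the quad's own UVs when the quad fills the whole render target.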
I wrote shaders for diffuse lighting.
The normals are calculated in the vertex shader: normal = gl_NormalMatrix * gl_Normal;
But when I rotate the camera, the normals also start to rotate with the camera. How do I fix this?
You must be generating your normal matrix incorrectly.
NormalMatrix = transpose(inverse(mat3(ViewMatrix * ModelMatrix)))
Also, unless you're forced to use gl_NormalMatrix and gl_Normal, you should use shader uniforms and in variables and calculate the matrices yourself, rather than relying on the older fixed-function model.
If you don't know how to do this, you should find a tutorial on OpenGL 4 to learn the programmable shader pipeline. OGLDev is pretty good.
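As a sketch of that approach (all names below are placeholders, not from the original code), the matrices can be built on the CPU and passed in as uniforms:
#version 330 core
uniform mat4 modelViewMatrix;   // view * model, computed on the CPU
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;      // transpose(inverse(mat3(view * model))), computed on the CPU
in vec3 in_Position;
in vec3 in_Normal;
out vec3 normalView;            // normal in view space, for the fragment shader
void main()
{
    normalView = normalize(normalMatrix * in_Normal);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(in_Position, 1.0);
}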
I've got this not-so-small-anymore tile-based game, which is my first real OpenGL project. I want to render every tile as a 3D object. So at first I created some objects, like a cube and a sphere, gave them vertex normals and rendered them in immediate mode with flat shading. But since I've got around 10,000 objects per level, it was a bit slow. So I put my vertices and normals into VBOs.
That's where I encountered the first problem: before using VBOs I just pushed and popped matrices for every object and used glTranslate / glRotate to place them in my scene. But when I did the same with VBOs, the lighting started to behave strangely. Instead of a fixed light position behind the camera, the light seemed to rotate with my objects. When moving 180 degrees around them, I could see only a shadow.
So I did some research. I could not find any answer to my specific problem, but I read that instead of using glTranslate/glRotate one should implement shaders and provide them with uniform matrices.
I thought "perhaps that could fix my problem too" and implemented a first small vertex shader program which only stretched my objects a bit, just to see if I could get a shader to work before focusing on the details.
void main(void)
{
    // Copy the incoming vertex and shrink it to half size in x and y.
    vec4 v = gl_Vertex;
    v.x = v.x * 0.5;
    v.y = v.y * 0.5;
    // Standard transform to clip space.
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Well, my objects do get stretched, but now OpenGL's flat shading is broken; I just get white shades. And I can't find any helpful information. So I have a few questions:
Can I only use one shader at a time, and when I use my own shader, is OpenGL's flat shading turned off? If so, do I have to implement flat shading myself?
What about my vertex normals? I read somewhere that there is something called a normal matrix. Perhaps I have to apply operations to my normals as well when modifying the vertices?
The fact that your lighting gets messed up when matrix operations change means that your calls to glLightfv(..., GL_POSITION, ...) happen in the wrong context (not the OpenGL context, but the state of the matrices, etc.): the light position is transformed by the model-view matrix that is current at the time of the call.
Well, my objects get stretched - but now OpenGLs flat shading is broken. I just get white shades
I think you mean Gouraud shading (flat shading means something different). The thing is: if you're using a vertex shader, you must do everything the fixed-function pipeline did, and that includes the lighting calculation. Lighthouse3D has a nice tutorial (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), as does Nicol Bolas (http://arcsynthesis.org/gltut/Illumination/Illumination.html).
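As a rough sketch of what "doing the lighting yourself" can look like, here is a minimal compatibility-profile vertex shader computing a single diffuse (Gouraud-style) term, loosely following the fixed-function model; it uses only the old built-ins the question already relies on and is an illustration rather than a drop-in replacement:
#version 120
void main(void)
{
    // The normal must be transformed with the normal matrix, not the model-view matrix.
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    // Eye-space vertex position and direction towards light 0
    // (positions set via glLightfv are stored in eye space).
    vec3 eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 l = normalize(gl_LightSource[0].position.xyz - eyePos);
    // Lambertian diffuse term.
    float diff = max(dot(n, l), 0.0);
    gl_FrontColor = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * diff;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
A matching fragment shader would then simply write gl_FragColor = gl_Color;.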
What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
void main() {
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate the fragment shader receives for the texture is always (0, 0), i.e. the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7, but alas, the problem persisted. When the color at (0, 0) changes, so does the color of the entire scene, rather than only the appropriate pixel/fragment changing.
And that's what I'm hoping to get some insight on: how to pass the correct coordinates (I'd like the corners of the texture to map to the corners of my screen). And yes, this is a beginner's question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things but severely lacking on the OpenGL side in the examples I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the built-in per-vertex texture coordinate for the 0th (or 7th) texture coordinate set, filled by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead, you just use the current fragment's screen-space position as the texture coordinate, which can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h], while textures are accessed with normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing other than the fragment's depth value that would have been written into the texture in the first pass. This way you skip the depth-writing first pass and the texture access in the second pass entirely.
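Putting that together, a sketch of how the fragment shader from the question could visualize depth in a single pass, keeping the variable names used there (closer fragments come out darker because gl_FragCoord.z is 0 at the near plane and 1 at the far plane with the default depth range; note the value is non-linear in eye-space distance):
in vec3 Color;       // per-vertex colour from the vertex shader, as in the question
out vec4 FragColor;
void main()
{
    FragColor = vec4(vec3(Color) * gl_FragCoord.z, 1.0);
}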