OpenGL specular lighting is static

I am drawing 3D shapes in OpenGL and am trying to add specular lighting to an object, but it doesn't seem to work. I have done ray tracing, so I know that specular lighting depends on the camera's position. In OpenGL, however, the specular highlight stays in the same spot.
Example:
The bright spot on the left of the arrow-shaped object doesn't move, no matter where I view the object from. My light source is a directional light, in line with the sphere and the arrow. The shiny spot, however, doesn't move with the camera; it's completely static. It also doesn't appear on the other side of the arrow, where the light hits it from the same angle. Is this how it's supposed to work? How can I get it to move?
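For reference, the camera dependence in specular lighting comes from the view vector used in the highlight term. Here is a minimal Blinn-Phong sketch in GLSL; the uniform and varying names are made up for illustration and this is not the fixed-function pipeline's internal code:

// Minimal Blinn-Phong specular sketch in eye space; all names are illustrative.
varying vec3 vEyePos;      // fragment position in eye space
varying vec3 vNormal;      // interpolated normal in eye space
uniform vec3 uLightDir;    // directional light, already in eye space
uniform float uShininess;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(-uLightDir);   // towards the light
    vec3 V = normalize(-vEyePos);     // towards the camera (the eye sits at the origin)
    vec3 H = normalize(L + V);        // half vector: this is where the view dependence enters
    float spec = pow(max(dot(N, H), 0.0), uShininess);
    gl_FragColor = vec4(vec3(spec), 1.0);  // specular term only
}

Fixed-function OpenGL has the same dependence internally, so a highlight that never moves usually points at wrong normals or at a light being specified in the wrong coordinate space.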

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV=.5f;
Vertex aV[4];
aV[0].mPos=Vector(-theSize,-theSize,0);
aV[0].mUV=Point(-aUV,-aUV);
aV[1].mPos=Vector(theSize,-theSize,0);
aV[1].mUV=Point(aUV,-aUV);
aV[2].mPos=Vector(-theSize,theSize,0);
aV[2].mUV=Point(-aUV,aUV);
aV[3].mPos=Vector(theSize,theSize,0);
aV[3].mUV=Point(aUV,aUV);
// Position (in 3d space) and a point at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
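A vertex shader along those lines might look roughly like this sketch (this is not the asker's actual shader; the uniform and attribute names are invented):

// Hypothetical billboard vertex shader matching the description above.
uniform mat4 uniform_ViewMatrix;
uniform mat4 uniform_ProjectionMatrix;
uniform vec3 uniform_Center;       // billboard position sent in as a uniform

attribute vec3 attr_Pos;           // corner offsets: (+/-theSize, +/-theSize, 0)
attribute vec2 attr_UV;            // -0.5 .. 0.5, so 0,0 is the center

varying vec2 mUV;

void main()
{
    // Move the center into view space, then add the corner offset there,
    // so the quad always faces the camera and its size comes from the vertex position.
    vec4 centerView = uniform_ViewMatrix * vec4(uniform_Center, 1.0);
    mUV = attr_UV;
    gl_Position = uniform_ProjectionMatrix * (centerView + vec4(attr_Pos.xy, 0.0, 0.0));
}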
For the fragment shader, I do this:
vec3 aRayOrigin=vec3(0.,0.,5.);
vec3 aRayDirection=normalize(vec3(vertex.mUV,-2.5));
//
// Hit object located at 0,0,0, oriented to uniform_PointAt's direction
//
vec4 aHH=RayHitObject(aRayOrigin,aRayDirection,uniform_PointAt.xyz);
if (aHH.x>0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening-- I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, they should be getting tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter-- what's the right way to do it?
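For what it's worth, one common way to make the ray follow the perspective is to build its direction per fragment from the inverse projection matrix instead of hard-coding vec3(vertex.mUV, -2.5). This is only a sketch of that idea; uniform_InverseProjection and uniform_Viewport are assumed inputs, not part of the code above:

// Sketch: derive a per-fragment view ray so fragments near the screen edge
// get rays that approach the object from the side.
uniform mat4 uniform_InverseProjection;
uniform vec4 uniform_Viewport;     // x, y, width, height

vec3 ViewRayDirection()
{
    // Fragment position in normalized device coordinates (-1..1).
    vec2 aNDC = (gl_FragCoord.xy - uniform_Viewport.xy) / uniform_Viewport.zw * 2.0 - 1.0;

    // Unproject a point on the far side of the frustum and treat it as a direction.
    vec4 aFar = uniform_InverseProjection * vec4(aNDC, 1.0, 1.0);
    return normalize(aFar.xyz / aFar.w);
}

The ray origin would then be the camera position expressed in the same space as the hit test, rather than the fixed vec3(0., 0., 5.).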

HLSL fixed lighting location

Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I'm not sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that my light stays in one place relative to the camera and not the object itself. I have tried multiplying the input normals by the view matrix, but I get the same result.
Also, should I be setting the light direction according to the camera instead?
EDIT: removed pastebin link since that is against the rules...
Use camera-dependent values only for transforming the vertex position into view and projected space (needed in the shaders for the clipping and rasterizer stages). The video card needs to know where to draw your pixel.
For lighting, in addition to the camera-transformed value, you normally pass the world-space position of the vertex and the world-space normal to the shader stages that need them (i.e. the pixel shader stage for Phong lighting).
So you can set your light position, or better the light direction, in world-space coordinates as a global variable (shader constant/uniform). With that, the lighting is independent of the camera position.
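A minimal sketch of that setup (the question is HLSL, but the structure is the same in GLSL; every name here is illustrative):

// Vertex shader sketch: the light direction is a world-space uniform, so it does
// not rotate with the model and does not move with the camera.
uniform mat4 uWorldMatrix;      // model -> world
uniform mat4 uMVP;              // model -> clip, only for the rasterizer
uniform vec3 uLightDirWorld;    // directional light in world space

attribute vec3 aPosition;
attribute vec3 aNormal;

varying float vDiffuse;

void main()
{
    // Transform the normal with the world matrix only (assumes no non-uniform scaling).
    vec3 normalWorld = normalize((uWorldMatrix * vec4(aNormal, 0.0)).xyz);
    vDiffuse = max(dot(normalWorld, -normalize(uLightDirWorld)), 0.0);
    gl_Position = uMVP * vec4(aPosition, 1.0);
}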
If you want an effect like a flashlight, you can set the light position to the camera position and the light direction to your look direction, so the bright parts are always in the center of your viewing frustum.
good luck

Adding sunlight to a scene

I am rendering an interactive 3D scene and I am wondering: how do I add sunlight to it? I'll try my best to explain how I have it set up now.
What you see right now is that the directional light (the sun) is denoted by the yellow dot, which I want to replace with realistic sunlight.
The current order of drawing is:
1. For all objects: do a light depth pass for the shadow.
2. Then, for all objects: do a draw pass for the object itself, using the light depth texture.
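As a rough illustration of that second pass (the names and the exact shadow comparison are assumptions, not taken from the poster's engine), a directional sun usually ends up looking something like this in the object's fragment shader:

// Sketch of the object draw pass: one directional light (the sun) attenuated
// by a shadow factor read from the light depth texture. All names are illustrative.
uniform sampler2D uShadowMap;   // depth rendered from the light's point of view
uniform vec3 uSunDirWorld;      // direction the sunlight travels, world space
uniform vec3 uSunColor;
uniform vec3 uAlbedo;

varying vec3 vNormalWorld;
varying vec4 vPosLightSpace;    // fragment position in the light's clip space

void main()
{
    // Shadow test: compare this fragment's depth with the depth stored by the light pass.
    vec3 shadowCoord = vPosLightSpace.xyz / vPosLightSpace.w * 0.5 + 0.5;
    float storedDepth = texture2D(uShadowMap, shadowCoord.xy).r;
    float shadow = (shadowCoord.z - 0.005 > storedDepth) ? 0.0 : 1.0;   // small depth bias

    // Simple Lambert term for the sun.
    float nDotL = max(dot(normalize(vNormalWorld), -normalize(uSunDirWorld)), 0.0);

    vec3 lit = uAlbedo * (0.2 + shadow * nDotL) * uSunColor;            // 0.2 = flat ambient
    gl_FragColor = vec4(lit, 1.0);
}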
Where would I add realistic sunlight? I have a few ideas about it:
After the current drawing order, save the output into a texture, and use a shader that takes the texture and adds sunlight on top of it.
After the current drawing order, use a shader that adds the sunlight simply to what has been drawn so far, such that it will be drawn after everything is on the screen.
Or maybe draw the sunlight before the rest of the scene gets drawn?
How would you deal with rendering nice sunlight that represents a real-life sun?
To realistically simulate sunlight, you probably need to implement some form of global illumination. A lot of the lighting we see on objects comes not directly from the light source, but from light bounced off of other objects. Global illumination simulates the bounced light.
[Global Illumination] take[s] into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).
Another technique that may not be physically accurate, but gives "nice"-looking results, is ambient occlusion:
ambient occlusion is used to represent how exposed each point in a scene is to ambient lighting. So the enclosed inside of a tube is typically more occluded (and hence darker) than the exposed outer surfaces; and deeper inside the tube, the more occluded (and darker) it becomes.

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I don't have any trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only the pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have three advantages:
No face-culling issue
No camera-position-inside-light-sphere issue
Much more efficient (number of vertices severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should depend on the distance (camera to light) and somehow on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle was projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in this case none of the fragments will be generated, and the light will "disappear".
Lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations will also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length. It must be a point on the border in screen space, obviously.
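Following that suggestion, here is a sketch of the scaling calculation in the same style as the centerpoint code above; cameraRight and lightRadius are assumed to be available and expressed in the same space that modelViewProjectionMatrix expects:

// Sketch: project the light center and a surface point offset along the camera's
// right axis, then use their screen-space distance as the circle radius.
vec4 centerClip  = modelViewProjectionMatrix * vec4(pos, 1.0);
vec4 surfaceClip = modelViewProjectionMatrix * vec4(pos + cameraRight * lightRadius, 1.0);

vec2 centerNDC  = centerClip.xy  / centerClip.w;    // perspective division
vec2 surfaceNDC = surfaceClip.xy / surfaceClip.w;

// Radius in normalized device coordinates; multiply by half the viewport size for pixels.
float circleRadius = length(surfaceNDC - centerNDC);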

OpenGL Spotlight shining through from rear-face

I have a spotlight source in OpenGL, pointing towards a texture-mapped sphere.
I rotate the light source with the sphere, such that if I rotate the sphere to the 'non-light' side, that side should be dark.
The odd part is, the spotlight seems to be shining through my sphere (it's solid, with no gaps between triangles). The light seems to be 'leaking' through to the other side.
Any thoughts on why this is happening?
Screenshots:
Front view, low light to emphasize the problem
Back view, notice the round area that is 'shining through'
It's really hard to tell from the images, but:
Check if GL_LIGHT_MODEL_TWO_SIDE is being set (two-sided lighting), but more importantly, have a look at the normals of the sphere you are rendering.
Edit: Also, change the background colour to something lighter. Oh, and make sure you aren't rendering with alpha blending turned on (maybe it's a polygon sorting issue).
OK, I'm a noob - I was specifying my normals but not calling glEnableClientState(GL_NORMAL_ARRAY), hence all the normals were facing one direction (I think that's the default, no?).
Anyway - a lesson learned - always go back over the basics.