Point Light not rendering - C++

I'm trying to render a couple of point lights in my scene, but I'm having trouble getting the lights to actually illuminate anything. The only light I've gotten to work is a directional light, which lights up the scene initially:
The two rocks are displayed at the same positions as the two point lights. I've toyed around with the diffuse, color and attenuation values, but got the same results. When I changed the ambient intensity of either light, however, the color did change:
My calculations within GLSL are correct and my uniforms are read correctly as well, but somehow I've lost my diffuse intensity. m_pointLight[0].DiffuseIntensity doesn't register a change, but if the commented AmbientIntensity is used, the scene is tinted red.
m_scale += 0.0057f;
// changing the value makes scene red
m_pointLight[0].AmbientIntensity = 0.5f;
// changing the value does nothing
m_pointLight[0].DiffuseIntensity = 1.5f;
// light color is red
m_pointLight[0].Color = glm::vec3(1.0f, 0.0f, 0.0f);
// position of light (same as rock)
m_pointLight[0].Position = glm::vec3(3.0f, 1.0f, FieldDepth * (cosf(m_scale) + 1.0f) / 2.0f);
// Light decay
m_pointLight[0].Attenuation.Linear = 0.01f;
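For context, the per-fragment point-light term that this kind of setup usually feeds looks something like the following (a minimal GLSL sketch; worldPos, normal and the gPointLight names are assumptions, since the actual shader isn't shown):
struct Attenuation { float Constant; float Linear; float Exp; };
struct PointLight {
    vec3  Color;
    float AmbientIntensity;
    float DiffuseIntensity;
    vec3  Position;
    Attenuation Atten;
};
uniform PointLight gPointLight;  // mirrors m_pointLight[0] on the C++ side

// Inside the fragment shader, with worldPos and normal assumed:
vec3 toLight = gPointLight.Position - worldPos;
float dist = length(toLight);
vec3 L = toLight / dist;
float diffuseFactor = max(dot(normalize(normal), L), 0.0);
vec3 ambient = gPointLight.Color * gPointLight.AmbientIntensity;
vec3 diffuse = gPointLight.Color * gPointLight.DiffuseIntensity * diffuseFactor;
float atten = gPointLight.Atten.Constant
            + gPointLight.Atten.Linear * dist
            + gPointLight.Atten.Exp * dist * dist;
vec3 lightContribution = (ambient + diffuse) / atten;
Note that if diffuseFactor clamps to zero (the normal faces away from the light, or the normal and position are in mismatched spaces), changing DiffuseIntensity has no visible effect while AmbientIntensity still tints the scene, which matches the symptom described above.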

Not a real answer, but I figured out part of what my problem was:
The problem was the angle I was looking from; something is quite wrong with my code. I originally set my camera position to (0, 100, 75). After moving a small block of code to where I set my perspective and other uniforms, a bit of light started to show from the point light. The problem then was that when the two rocks moved closer to the center of the floor, the point light of the first rock started to glow red. The same goes for the second rock, except it was set to blue. But as the rocks moved closer to the camera, the colors faded gradually until they appeared to be off; when they moved back toward the center, the colors gradually brightened back to full intensity.
So then I was thinking this may have to do with the angle I'm viewing from. Instead of the camera position (0, 100, 75), I decided to look from the complete opposite end of the floor (-75 on the z-axis). And this is what I got:
The only problem remaining is that depending on where the camera is, the point light closest to the camera doesn't show (other than on the underside of the closest rock). The bottom right image is the only one displaying all lights correctly:
If someone could explain what is going on, it would be greatly appreciated.

Related

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV=.5f;
Vertex aV[4];
aV[0].mPos=Vector(-theSize,-theSize,0);
aV[0].mUV=Point(-aUV,-aUV);
aV[1].mPos=Vector(theSize,-theSize,0);
aV[1].mUV=Point(aUV,-aUV);
aV[2].mPos=Vector(-theSize,theSize,0);
aV[2].mUV=Point(-aUV,aUV);
aV[3].mPos=Vector(theSize,theSize,0);
aV[3].mUV=Point(aUV,aUV);
// Position (in 3d space) and a point at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
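For reference, such a billboard vertex shader might look roughly like this sketch (uCenter, uModelView and uProjection are assumed names; the asker's actual shader isn't shown):
// Hypothetical billboard vertex shader: the quad corners are offset
// in eye space so the quad always faces the camera.
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform vec3 uCenter;            // billboard center, sent in as a uniform

attribute vec3 aPos;             // corner offsets, e.g. (+-theSize, +-theSize, 0)
attribute vec2 aUV;              // spans -0.5 .. 0.5 so (0,0) is the center
varying vec2 vUV;

void main()
{
    vec4 centerEye = uModelView * vec4(uCenter, 1.0);
    gl_Position = uProjection * (centerEye + vec4(aPos.xy, 0.0, 0.0));
    vUV = aUV;
}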
For the fragment shader, I do this:
vec3 aRayOrigin=vec3(0.,0.,5.);
vec3 aRayDirection=normalize(vec3(vertex.mUV,-2.5));
//
// Hit object located at 0,0,0 oriented to uniform_Pointat's direction
//
vec4 aHH=RayHitObject(aRayOrigin,aRayDirection,uniform_PointAt.xyz);
if (aHH.x>0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening: I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, the rays should be tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter, what's the right way to do it?
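One common approach (a sketch under assumptions, not necessarily a drop-in for this engine) is to stop hard-coding the ray and instead derive it per fragment from the fragment's eye-space position, so rays naturally hit edge-of-screen objects from the side:
// vEyePos is assumed: the interpolated eye-space position of the
// billboard corner, i.e. (modelview * vertex).xyz from the vertex shader.
varying vec3 vEyePos;

// In eye space the camera sits at the origin, so:
vec3 aRayOrigin = vec3(0.0);
vec3 aRayDirection = normalize(vEyePos);  // diverges toward the screen edges
vec4 aHH = RayHitObject(aRayOrigin, aRayDirection, uniform_PointAt.xyz);
The pointAt uniform then has to live in the same eye space, which the world*view multiplication above already produces; the projection matrix stays out of the lighting math entirely.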

How to set up OpenGL perspective correction?

I am dealing with an experimental setup where a simple picture is displayed to a gene-modified fly. The picture is projected onto a screen at certain distances from the fly.
Now it's my turn to set up the perspective correction, so that the displayed image, for example a horizontal bar, appears wider at a longer distance from the fly's point of view (see the experimental setup). The code currently looks like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if (!USEFRUSTUM)
    gluPerspective(90.0f, 2 * width / height, 0.1f, 100.0f);
else
    glFrustum(92.3f, 2.3f, -25.0f, 65.0f, 50.0f, 1000.0f);
The values were entered by someone a few years ago, and we figured out they are not accurate anymore. However, I am confused about which values to enter or change to make the projection work properly, because, as you can see in the experimental setup, the fly's field of view is a bit tilted.
I thought about these values:
fovy = angle between a and c
measure width and height on the projection screen
but what is zNear? Should I measure the distance from the fly to the top or the bottom of the screen? I don't get why somebody entered 0.1f, because that seems too near to me.
How can I determine the value of zFar? Is it the maximum distance from an object to the fly?
I got my information on gluPerspective from: https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html
I also checked "Simplest way to set up a 3D OpenGL perspective projection", but that post doesn't cover my experimental setup, which is the source of my confusion.
Thank you for any help!
This is one of the prime examples where the frustum method is easier to use than the perspective one. A frustum is essentially a clipped pyramid. Imagine your fly at the tip of a four-sided, tilted pyramid. The near value gives the distance to the pyramid's base, and left, right, bottom and top give the perpendicular distance from the tip to each side of the pyramid's base. It's perfectly reasonable for the tip to be "outside" the base area; in the case of your fly, the center might be just above the "left" edge.
So assuming your original picture we have this:
"Near" gives the "distance" to the projection screen and right (and of course also top, bottom and left) the respective distances of the tip, err, fly perpendicular to the edges of the screen.
"far" is not important for the projective features and is used solely for determining the depth range.
So what you can put into your program is the following:
double near = ${distance to screen in mm};
double left = ${perpendicular to left screen edge in mm};
double right = ${perpendicular to right screen edge in mm};
double top = ${perpendicular to top screen edge in mm};
double bottom = ${perpendicular to bottom screen edge in mm};
double s = ${scaling factor from mm into OpenGL units};
double zrange = ${depth range in OpenGL units the far plane is behind near};
glFrustum(s*left, s*right, s*bottom, s*top, s*near, s*near + zrange);
Keep in mind that left, right, top and bottom may be negative. Say you want a symmetrical (unshifted) view in one direction: then left = -right, and a shift is essentially adding/subtracting the same offset to both.
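As a worked example (all numbers made up purely to illustrate the sign conventions): suppose the fly sits 100 mm from the screen, with the screen edges measured perpendicular from the fly's position: 40 mm to the left edge, 160 mm to the right edge, 20 mm down to the bottom edge and 120 mm up to the top edge. Then the call might look like this:
double near   = 100.0;   // distance fly -> screen, in mm
double left   = -40.0;   // left edge lies 40 mm to the left, hence negative
double right  = 160.0;
double bottom = -20.0;   // bottom edge lies 20 mm below the fly
double top    = 120.0;
double s      = 0.01;    // arbitrary choice: 100 mm per OpenGL unit
double zrange = 10.0;    // depth range in OpenGL units behind the near plane
glFrustum(s*left, s*right, s*bottom, s*top, s*near, s*near + zrange);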

OpenGL specular lighting is static

I am drawing 3D shapes in OpenGL. I am trying to add specular lighting to an object, but it doesn't seem to work. I have done ray tracing, so I know that specular lighting depends on the camera's position. In OpenGL, however, I have a static bright spot.
Example:
The bright spot on the left of the arrow-shaped object doesn't move, no matter where I view the object from. My light source is in line with the sphere and the arrow; it is a directional light. The shiny spot, however, doesn't move with the camera; it's completely static. It also doesn't appear on the other side of the arrow, where the light hits it from the same angle. Is this how it's supposed to work? How can I get it to move?

OpenGL Lighting and Positioning

I apologize, because this seems like a pretty straightforward problem that should be obvious, but it appears that I'm clueless.
I'm writing an engine and using Light 0 as the "sun" for the scene. I figure, ideally, the sun will be a fixed light source, not a vector, so the w/fourth index in the position vector should be 1.0f.
I set up the scene's ortho viewpoint, then set up the light to be at the position of the character (x/y are coordinates of the plane terrain, with positive z facing the camera and indicating "height" on the terrain; the scene is also rotated on the x axis for an isometric view).
The character is always centered in the screen -- but I've noticed that for some reason, the light seems to shine at a position between the character and world coordinates 0,0,0. The further the character gets from world coordinates 0,0,0, the larger/wider the light is. If the character is at 0,0,0, the light is very small. If the character moves to something like 0,200,0, the light is HUGE.
I'm just trying to get my feet wet and have a "light" that follows the character (and then adjust for position, etc, later, to create a sun).
Graphics.BeginRenderingLayer();
{
    Video.MapRenderingMode();
    Graphics.BeginLightingLayer( Graphics.AmbientR, Graphics.AmbientG, Graphics.AmbientB, Graphics.DiffuseR, Graphics.DiffuseG, Graphics.DiffuseB, pCenter.X, pCenter.Y, pCenter.Z );
    {
        Graphics.BeginRenderingLayer();
        {
            Graphics.CenterCamera( pCenter.X, pCenter.Y, pCenter.Z );
            RenderMap( pWorld, pCenter, pCoordinate );
        }
        Graphics.EndRenderingLayer();
        Graphics.BeginRenderingLayer();
        {
            Graphics.DrawMan( pCenter );
        }
        Graphics.EndRenderingLayer();
    }
    Graphics.EndLightingLayer();
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer = PushMatrix, EndRenderingLayer = PopMatrix
Video.MapRenderingMode = Ortho Projection and Scene Rotation/Zoom
CenterCamera does a translate to the opposite of the X/Y/Z, such that the character is now centered at X/Y/Z in the middle of the screen.
Any thoughts? Maybe I've confused some of my code here a little?
I figure, ideally, the sun will be a fixed light source, not a vector, so the w/fourth index in the position vector should be 1.0f.
As far as anything happening over basic human scales is concerned, the Sun is infinitely far away. It's pretty much the definitive "directional light". So unless you are rendering across hundreds or thousands of miles, the differences in the direction of sunlight will be entirely irrelevant.
In short: you should use a directional light, just like everyone else.
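In legacy-GL terms that means w = 0 in the position vector; a minimal sketch (the direction values here are placeholders):
// With w = 0.0f, GL_LIGHT0 becomes directional: only the direction
// matters and there is no distance attenuation.
GLfloat sunDirection[] = { 0.3f, 1.0f, 0.5f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sunDirection);
Note that glLightfv(GL_POSITION, ...) transforms the vector by the current modelview matrix, so issue it after the camera transform if the sun is supposed to stay fixed relative to the world.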

What is Half vector in modern GLSL?

http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir2
reports that the half vector in the OpenGL context is 'Eye position - Light position', but then it goes on to say 'luckily OpenGL calculates it for us' [which is now deprecated].
How can it practically be calculated? (A simple example would be greatly appreciated.) Mainly, it puzzles me what "Eye" is and how it can be derived.
At the moment I've managed to make the specular calculation work (with a good visual result) with the half vector equal to Light, where Light is
vec3 Light = normalize(light_position - vec3(out_Vertex));
Now, I've no idea why that worked.
[If at least I knew what "Eye" is and how it can be derived practically.]
The half-vector is used in specular lighting and represents the normal at micro-imperfections in the surface which would cause incoming light to reflect toward the viewer. When the half-vector is closer to the surface normal, more imperfections align with the actual surface normal. Smoother surfaces have fewer imperfections pointing away from the surface normal, so they produce a sharper highlight with a more significant drop-off of light as the half-vector moves away from the actual normal than a rougher surface would. The amount of drop-off is controlled by the specular term, which is the power to which the cosine between the half-vector and normal vector is raised, so smoother surfaces have a higher power.
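In GLSL that drop-off is the familiar one-liner (N, H and uShininess are assumed names):
// N = surface normal, H = half-vector, both normalized;
// uShininess is the specular power described above.
float specularFactor = pow(max(dot(N, H), 0.0), uShininess);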
We call it the half-vector (H) because it is half-way between the vector pointing to the light (light vector, L) and the vector pointing to the viewer (which is the eye position (0,0,0) minus the vertex position in eye space; the view vector, V). Before you calculate H, make sure the vectors to the light and to the eye are in the same coordinate space (legacy OpenGL used eye space).
H = normalize( L + V )
You did the correct calculation, but your variables could be named more appropriately.
The term light_position here isn't entirely correct, since the tutorial you cited is the directional-light tutorial, and by definition directional lights don't have a position. The light vector for directional lights is independent of the vertex, so here you have combined a few equations. Keep in mind that the light vector points toward the light, opposite to the flow of photons from the light.
// I'm keeping your term here... I'm assuming you wanted to simulate
// a light coming from that position using a directional light, so the
// light direction would be -light_position, making
// L = -(-light_position) = light_position.
// (For a point light you would instead need the direction from the
// vertex to the light, i.e. light_position - out_Vertex.)
vec3 L = light_position;
// Really it is (eye - vertexPosition), so (0,0,0) - out_Vertex.
vec3 V = -out_Vertex;
vec3 H = normalize(L + V);
In the fragment shader, the vertex coordinates can be seen as a vector that goes from the camera (or the "eye" of the viewer) to the current fragment, so by reversing the direction of this vector we get the "Eye" vector you are looking for. When calculating the half-vector you also need to be aware of the direction of the light-position vector: on the webpage you link they have it pointing towards the surface, while on http://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model it's pointing away from the surface.
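Putting the pieces together, a minimal eye-space Blinn-Phong fragment shader might look like the following sketch (vEyePos, vNormal, uLightDir and uShininess are assumed names, not from the tutorial):
// All vectors are in eye space; the camera sits at (0,0,0).
varying vec3 vEyePos;     // interpolated vertex position in eye space
varying vec3 vNormal;     // interpolated normal in eye space
uniform vec3 uLightDir;   // normalized direction *toward* the light
uniform float uShininess; // specular power

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = uLightDir;              // directional light: same for all fragments
    vec3 V = normalize(-vEyePos);    // "Eye" vector: from fragment toward camera
    vec3 H = normalize(L + V);       // the half-vector
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), uShininess);
    gl_FragColor = vec4(vec3(diff + spec), 1.0);
}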