OpenGL Lighting and Positioning

I apologize, because this seems like a pretty straightforward problem that should be obvious, but it appears that I'm clueless.
I'm writing an engine and using Light 0 as the "sun" for the scene. I figure, ideally, the sun will be a fixed light source, not a vector, so the w/fourth index in the position vector should be 1.0f.
I set up the scene's Ortho viewpoint, then set up the light to be at the position of the character (x/y are coordinates on the plane of the terrain, with positive z facing the camera and indicating "height" on the terrain -- the scene is also rotated on the x axis for an isometric view).
The character is always centered in the screen -- but I've noticed that for some reason, the light seems to shine at a position between the character and world coordinates 0,0,0. The further the character gets from world coordinates 0,0,0, the larger/wider the light is. If the character is at 0,0,0, the light is very small. If the character moves to something like 0,200,0, the light is HUGE.
I'm just trying to get my feet wet and have a "light" that follows the character (and then adjust for position, etc, later, to create a sun).
Graphics.BeginRenderingLayer();
{
    Video.MapRenderingMode();
    Graphics.BeginLightingLayer( Graphics.AmbientR, Graphics.AmbientG, Graphics.AmbientB, Graphics.DiffuseR, Graphics.DiffuseG, Graphics.DiffuseB, pCenter.X, pCenter.Y, pCenter.Z );
    {
        Graphics.BeginRenderingLayer();
        {
            Graphics.CenterCamera( pCenter.X, pCenter.Y, pCenter.Z );
            RenderMap( pWorld, pCenter, pCoordinate );
        }
        Graphics.EndRenderingLayer();

        Graphics.BeginRenderingLayer();
        {
            Graphics.DrawMan( pCenter );
        }
        Graphics.EndRenderingLayer();
    }
    Graphics.EndLightingLayer();
}
Graphics.EndRenderingLayer();
Graphics.BeginRenderingLayer = PushMatrix, EndRenderingLayer = PopMatrix
Video.MapRenderingMode = Ortho Projection and Scene Rotation/Zoom
CenterCamera translates by the negative of X/Y/Z, so that the character at X/Y/Z ends up centered in the middle of the screen.
Any thoughts? Maybe I've confused some of my code here a little?

I figure, ideally, the sun will be a fixed light source, not a vector, so the w/fourth index in the position vector should be 1.0f.
As far as anything happening over basic human scales is concerned, the Sun is infinitely far away. It's pretty much the definitive "directional light". So unless you are rendering across hundreds or thousands of miles, the differences in the direction of sunlight will be entirely irrelevant.
In short: you should use a directional light, just like everyone else.
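In the fixed-function pipeline the question implies (GL_LIGHT0, push/pop matrix), a directional light is selected by passing w = 0.0 in GL_POSITION -- the vector is then treated as a direction rather than a point. A minimal sketch, with placeholder direction/color values:
GLfloat sunDirection[] = { 0.3f, 0.4f, 1.0f, 0.0f };   // w = 0.0f -> directional light
GLfloat sunDiffuse[]   = { 1.0f, 1.0f, 0.9f, 1.0f };
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
// GL_POSITION is transformed by the current modelview matrix at the time of this call,
// so set it after the camera transform is on the stack if the sun is fixed in the world.
glLightfv(GL_LIGHT0, GL_POSITION, sunDirection);
glLightfv(GL_LIGHT0, GL_DIFFUSE, sunDiffuse);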

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u' = 4/pi * atan(u)
v' = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
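For reference, the same remapping written out in C++ with glm (the library and the helper name eacCorrect are my additions, handy mainly for checking values on the CPU):
#include <glm/glm.hpp>

// Project d onto the unit cube, then apply the 4/pi * atan correction per component.
glm::vec3 eacCorrect(glm::vec3 d) {
    d /= glm::max(glm::abs(d.x), glm::max(glm::abs(d.y), glm::abs(d.z)));
    return glm::atan(d) / glm::atan(1.0f);
}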
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
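One possible reading of that suggestion, sketched in C++ with glm (the function and the [0,1] texture-coordinate convention are my own, not part of the original recommendation): map the view direction to (longitude, latitude) coordinates for a 2D horizon strip, and switch to solid colors above/below a chosen cutoff latitude.
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

glm::vec2 equidistantCylindrical(const glm::vec3& d) {
    glm::vec3 n = glm::normalize(d);
    float lon = glm::atan(n.z, n.x);   // longitude in [-pi, pi]
    float lat = glm::asin(n.y);        // latitude in [-pi/2, pi/2]
    return glm::vec2(lon / (2.0f * glm::pi<float>()) + 0.5f,
                     lat / glm::pi<float>() + 0.5f);
}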
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always at the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader (with swizzling: gl_Position = clipPos.xyww); after the perspective divide, the final z value is then 1.0, which is the z value of the far plane.
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertices of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
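Roughly, on the C++ side this trick is paired with drawing the cube last under a GL_LEQUAL depth test (skyboxShader, skyboxVAO and cubemapTexture below are placeholders, not names from the question):
glDepthFunc(GL_LEQUAL);                        // let z == 1.0 (the far plane) pass the depth test
glUseProgram(skyboxShader);
glBindVertexArray(skyboxVAO);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
glDrawArrays(GL_TRIANGLES, 0, 36);             // a unit cube centered on the camera
glDepthFunc(GL_LESS);                          // restore the default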

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV=.5f;
Vertex aV[4];
aV[0].mPos=Vector(-theSize,-theSize,0);
aV[0].mUV=Point(-aUV,-aUV);
aV[1].mPos=Vector(theSize,-theSize,0);
aV[1].mUV=Point(aUV,-aUV);
aV[2].mPos=Vector(-theSize,theSize,0);
aV[2].mUV=Point(-aUV,aUV);
aV[3].mPos=Vector(theSize,theSize,0);
aV[3].mUV=Point(aUV,aUV);
// Position (in 3d space) and a point at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
For the fragment shader, I do this:
vec3 aRayOrigin=vec3(0.,0.,5.);
vec3 aRayDirection=normalize(vec3(vertex.mUV,-2.5));
//
// Hit object located at 0,0,0 oriented to uniform_Pointat's direction
//
vec4 aHH=RayHitObject(aRayOrigin,aRayDirection,uniform_PointAt.xyz);
if (aHH.x>0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening-- I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, they should be getting tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter-- what's the right way to do it?

OpenGL Lighting Artifact (calculating light before translation)

I am trying to create a scene in OpenGL and I am having trouble with my lighting. I believe it has something to do with translating my models from the origin of the world into their respective places.
I only have one light in my scene, placed on the right at the centre of the world; however, you can see the light on the wall at the front of the scene.
I have written my own shaders. I suspect that I'm calculating the lighting too early, as it seems to be calculated before the models are translated around the world -- that, or I am using local coordinates rather than world coordinates (I think that's right anyway...).
(please ignore the glass, they are using a global light and a different shader)
Does anyone know if this is indeed the case, or where the best place to find a solution would be?
Below is how I render my models.
glUseProgram(modelShader);

// center floor mat
if (floorMat)
{
    glUniformMatrix4fv(modelShader_modelMatrixLocation, 1, GL_FALSE, (GLfloat*)&(modelTransform.M));
    floorMat->setTextureForModel(carpetTexture);
    floorMat->renderTexturedModel();
}
https://www.youtube.com/watch?annotation_id=annotation_1430411783&feature=iv&index=86&list=PLRwVmtr-pp06qT6ckboaOhnm9FxmzHpbY&src_vid=NH68sIdF-48&v=P3DQXzyjswQ
Turns out I was not calculating the lighting in world space.
Rather than using the transformed model-world position, I was just using the plain vertex position.
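A rough C++/glm illustration of that fix (hypothetical names, not the original shader code) -- the light direction has to come from the vertex position after the model transform, i.e. in world space:
#include <glm/glm.hpp>

glm::vec3 lightDirAt(const glm::mat4& modelMatrix,
                     const glm::vec3& localVertexPos,
                     const glm::vec3& lightWorldPos)
{
    // Transform the raw vertex position into world space first...
    glm::vec3 worldPos = glm::vec3(modelMatrix * glm::vec4(localVertexPos, 1.0f));
    // ...and only then compute the direction to the light.
    return glm::normalize(lightWorldPos - worldPos);
}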

Point Light not rendering

I'm trying to render a couple point lights in my scene, but having trouble getting the actual lights to illuminate. The only light I got to work is a directional light which lights up the scene initially:
The two rocks are displayed at the same positions as the two point lights. I've toyed around with the diffuse, color, and attenuation values, but got the same results. But when I changed the ambience of either light, the color changed:
My calculations within GLSL are correct and my uniforms are read correctly as well. But somehow I lost my diffuse intensity. Changing m_pointLight[0].DiffuseIntensity doesn't register a change, but if the commented AmbientIntensity is used, the scene is tinted red.
m_scale += 0.0057f;
// changing the value makes scene red
m_pointLight[0].AmbientIntensity = 0.5f;
// changing the value does nothing
m_pointLight[0].DiffuseIntensity = 1.5f;
// light color is red
m_pointLight[0].Color = glm::vec3(1.0f, 0.0f, 0.0f);
// position of light (same as rock)
m_pointLight[0].Position = glm::vec3(3.0f, 1.0f, FieldDepth * (cosf(m_scale) + 1.0f) / 2.0f);
// Light decay
m_pointLight[0].Attenuation.Linear = 0.01f;
Not a real answer but I figured out part of what my problem was:
The problem was the angle I was looking from. Something is quite wrong with my code. I set my camera position to (0, 100, 75) originally. After moving a small block of code to where I set my perspective and other uniforms, a bit of light started to show from the point light. The problem then was that when the two rocks moved closer to the center of the floor, the point light of the first rock started to glow red. The same goes for the second rock, but it was set to blue. But as the rocks moved closer to the camera, the colors faded gradually until they appeared off. Then when they moved back to the center, the colors gradually got brighter to full strength.
So then I was thinking this may have to do with the angle that I'm at. So from the camera position (0, 100, 75), I decided to look from the complete opposite end of the floor (-75 for the z-axis). And this is what I got:
The only problem remaining is that depending on where the camera is, the point light closest to the camera doesn't show (other than on the underside of the closest rock). The bottom right image is the only one displaying all lights correctly:
If someone could explain what is going on, it would be greatly appreciated.

How do I implement basic camera operations in OpenGL?

I'm trying to implement an application using OpenGL and I need to implement the basic camera movements: orbit, pan and zoom.
To make it a little clearer, I need Maya-like camera control. Due to the nature of the application, I can't use the good ol' "transform the scene to make it look like the camera moves". So I'm stuck using transform matrices, gluLookAt, and such.
Zoom I know is dead easy, I just have to hook to the depth component of the eye vector (gluLookAt), but I'm not quite sure how to implement the other two, pan and orbit. Has anyone ever done this?
I can't use the good ol' "transform the scene to make it look like the camera moves"
OpenGL has no camera. So you'll end up doing exactly this.
Zoom I know is dead easy, I just have to hook to the depth component of the eye vector (gluLookAt),
This is not a zoom, this is a dolly. Zooming means varying the limits of the projection volume, i.e. the extents of an ortho projection, or the field of view of a perspective projection.
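A quick sketch of that distinction (my example; fovy, aspect, zNear, zFar and zoomFactor are placeholders):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovy / zoomFactor, aspect, zNear, zFar);   // zoom: narrow the field of view
glMatrixMode(GL_MODELVIEW);
// a dolly instead moves the eye along the view direction, e.g. by shrinking camera_radius below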
gluLookAt, which you've already run into, is your solution. The first three arguments are the camera's position (x,y,z), the next three are the camera's center (the point it's looking at), and the final three are the up vector (usually (0,1,0)), which defines the camera's y-z plane.*
It's pretty simple: you just call glLoadIdentity(), then gluLookAt(...), and then draw your scene as normal. Personally, I always do all the calculations on the CPU myself. I find that orbiting a point is an extremely common task. My template C/C++ code uses spherical coordinates and looks like:
double camera_center[3] = {0.0, 0.0, 0.0};
double camera_radius = 4.0;
double camera_rot[2] = {0.0, 0.0};
double camera_pos[3] = {
    camera_center[0] + camera_radius*cos(radians(camera_rot[0]))*cos(radians(camera_rot[1])),
    camera_center[1] + camera_radius*sin(radians(camera_rot[1])),
    camera_center[2] + camera_radius*sin(radians(camera_rot[0]))*cos(radians(camera_rot[1]))
};
gluLookAt(
    camera_pos[0],    camera_pos[1],    camera_pos[2],
    camera_center[0], camera_center[1], camera_center[2],
    0, 1, 0
);
Clearly you can adjust camera_radius, which will change the "zoom" of the camera, camera_rot, which will change the rotation of the camera about its axes, or camera_center, which will change the point about which the camera orbits.
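Panning (my addition, not part of the template above) amounts to shifting camera_center along the camera's right and up axes by the mouse delta; camera_pos then follows automatically, since it is recomputed from camera_center, camera_radius and camera_rot. A sketch using glm:
#include <glm/glm.hpp>

void pan(glm::dvec3& camera_center, const glm::dvec3& camera_pos,
         double dx, double dy, double speed = 0.01)
{
    glm::dvec3 view    = glm::normalize(camera_center - camera_pos);
    glm::dvec3 right   = glm::normalize(glm::cross(view, glm::dvec3(0.0, 1.0, 0.0)));
    glm::dvec3 true_up = glm::cross(right, view);
    camera_center += (-dx * right + dy * true_up) * speed;
}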
*The only other tricky bit is learning exactly what all that means. To clarify, because the internet is lacking:
The position is the (x,y,z) position of the camera. Pretty straightforward.
The center is the (x,y,z) point the camera is focusing at. You're basically looking along an imaginary ray from the position to the center.
Now, your camera could still be looking in any direction around this vector (e.g., it could be upside down, but still looking along the same direction). The up vector is a vector, not a position. It, along with that imaginary vector from the position to the center, forms a plane. This is the camera's y-z plane.