Dynamic cubemaps - OpenGL

When static cubemaps are used, the objects in the skybox texture are assumed to be far away, so it is not a problem that the reflected view does not change as the camera moves.
However, a dynamic cubemap also contains objects near the camera. Suppose, for example, that there is a giant pane of glass in front of the camera, with objects between the camera and the glass, and we need to compute refraction. Because we pass only a vec3 direction to the texture function in GLSL, the position on the glass is ignored: if the refraction vector at the middle of the glass is vec3(0, -0.2, -0.6) and the refraction vector at the bottom-right corner is also vec3(0, -0.2, -0.6), both points get the same color, even though they should not. How can this problem be handled?

To solve this issue you can use parallax-corrected cubemaps, which let you work with "local cubemaps" (bounded by a reference bounding box) instead of "infinite cubemaps".
Seb Lagarde has a very nice article explaining it all in detail (see the "Parallax correction for local cubemaps" chapter).
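The core of the idea fits in a few lines of fragment shader. The sketch below is only an illustration under assumed names, not Lagarde's exact code: boxMin, boxMax and cubemapPos describe the capture volume and capture position, and fragPosWorld / refractDirWorld are assumed to arrive from the vertex shader in world space. The fragment's ray is intersected with the bounding box and the cubemap is sampled with the direction from the capture position to that intersection, so different positions on the glass yield different colors.
#version 330 core
// Parallax-corrected ("local") cubemap lookup - a sketch, not the article's exact code.
// boxMin/boxMax/cubemapPos are assumed uniforms describing the capture volume.
in vec3 fragPosWorld;     // world-space fragment position
in vec3 refractDirWorld;  // world-space refraction (or reflection) direction
out vec4 fragColor;

uniform samplerCube envMap;
uniform vec3 boxMin;      // AABB the cubemap was captured inside
uniform vec3 boxMax;
uniform vec3 cubemapPos;  // world-space position the cubemap was rendered from

void main()
{
    // Assumes dir has no exactly-zero components (good enough for a sketch).
    vec3 dir = normalize(refractDirWorld);

    // Intersect the ray fragPosWorld + t*dir with the AABB (slab method).
    vec3 planeA   = (boxMax - fragPosWorld) / dir;
    vec3 planeB   = (boxMin - fragPosWorld) / dir;
    vec3 furthest = max(planeA, planeB);
    float t       = min(min(furthest.x, furthest.y), furthest.z);

    // Sample with the direction from the capture position to the hit point,
    // so fragments at different positions on the glass get different colors.
    vec3 hitPos = fragPosWorld + dir * t;
    fragColor = texture(envMap, hitPos - cubemapPos);
}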

Related

OpenGL Lighting Artifact (calculating light before translation)

I am trying to create a scene in OpenGL and I am having trouble with my lighting. I believe it has something to do with translating my models from the origin of the world into their respective places.
I only have one light in my scene, placed on the right in the centre of the world, yet you can see the light on the wall at the front of the scene.
I have written my own shaders. I suspect that I'm calculating the lighting too early, as it seems to be calculated before the models are translated around the world, or that I am using local coordinates rather than world coordinates (I think that's right, anyway...).
(Please ignore the glass objects; they use a global light and a different shader.)
Does anyone know if this is indeed the case, or where the best place to find a solution would be?
Below is how I render my models.
glUseProgram(modelShader);
// center floor mat
if (floorMat)
{
    glUniformMatrix4fv(modelShader_modelMatrixLocation, 1, GL_FALSE, (GLfloat*)&(modelTransform.M));
    floorMat->setTextureForModel(carpetTexture);
    floorMat->renderTexturedModel();
}
https://www.youtube.com/watch?annotation_id=annotation_1430411783&feature=iv&index=86&list=PLRwVmtr-pp06qT6ckboaOhnm9FxmzHpbY&src_vid=NH68sIdF-48&v=P3DQXzyjswQ
Turns out I was not calculating the lighting in world space.
Rather than using the model-matrix-transformed world position, I was just using the plain (untransformed) vertex position.
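For reference, a minimal vertex-shader sketch of computing the world-space position (and normal) for lighting. The uniform and attribute names are illustrative, not the poster's actual ones:
#version 330 core
// Vertex-shader sketch: compute the world-space position used for lighting.
layout(location = 0) in vec3 vertexPos;
layout(location = 1) in vec3 vertexNormal;

uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

out vec3 fragPosWorld;   // use this (not vertexPos) in the lighting calculation
out vec3 normalWorld;

void main()
{
    vec4 worldPos = modelMatrix * vec4(vertexPos, 1.0);
    fragPosWorld  = worldPos.xyz;
    // Normals must be transformed too (inverse-transpose handles non-uniform scale).
    normalWorld   = mat3(transpose(inverse(modelMatrix))) * vertexNormal;
    gl_Position   = projectionMatrix * viewMatrix * worldPos;
}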

OpenGL 3.3 z-fighting in an orthographic 2D view

I'm having some issues with z-fighting while drawing simple 2D textured quads with OpenGL. The symptom is two objects moving at the same speed, one on top of the other, but periodically each can be seen through the other and vice versa - a sort of flickering. I assume this is indeed z-fighting.
I have turned off depth testing and have the following settings as well:
gl.Disable(gl.DEPTH_TEST)
gl.DepthFunc(gl.LESS)
gl.Enable(gl.BLEND)
gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
My view and ortho matrices are as follows (I have already tried setting the near and far distances much farther apart, e.g. a range of 50000, but it still does not help):
Projection := mathgl.Ortho(0.0, float32(width), float32(height), 0.0, -5.0, 5.0)
View := mathgl.LookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
The only difference in my OpenGL setup is that instead of a glDrawElements call for each individual object, I package all vertices, UVs (sprite atlas), translation, rotation, etc. into one big batch sent to the vertex shader.
Does anyone have remedies for 2D z-fighting?
Edit:
I'm adding some pictures to further describe the scenario:
These images were taken a few seconds apart. They simply show textures moving from left to right. As they move, you can see from the images that one sprite overlaps the other and vice versa, back and forth, very fast.
Also note that my images (sprites) are PNGs with a transparent background.
It definitely isn't depth fighting if you have depth testing disabled, as shown in the code snippet.
"I package all vertices, uvs(sprite atlas), translation, rotation, etc in one big package sent to vertex shader" - you need to look into the order in which you add your sprites. Perhaps it is inconsistent for some reason.
This could be Z-fighting.
The usual causes are:
- fragments are at the same Z coordinate, or closer together than the accuracy of the Z coordinate
- fragments are too far from a perspective camera: with a perspective projection, the farther you are from Z-near, the less depth accuracy you have
Some ways to fix this:
- change the size/position of the overlapping surfaces slightly (see the sketch after this list)
- use more bits for the Z-buffer (depth buffer)
- use a linear or logarithmic Z-buffer
- increase Z-near or decrease Z-far, or both; with a perspective projection you can also combine several frustums to cover a high-precision Z range
- sometimes it helps to use glDepthFunc(GL_LEQUAL)
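A minimal sketch of the first fix applied to this 2D sprite case: give each sprite its own layer value and offset its Z in the vertex shader. The layer attribute is hypothetical (it is not part of the original batched vertex data), and this only makes a difference when the depth test is enabled:
#version 330 core
// Vertex-shader sketch: give each sprite a slightly different Z so
// overlapping quads no longer share the same depth value.
layout(location = 0) in vec2 position;  // 2D quad vertex
layout(location = 1) in float layer;    // hypothetical per-sprite layer index (0, 1, 2, ...)

uniform mat4 projection;  // the Ortho(0, w, h, 0, -5, 5) matrix
uniform mat4 view;        // the LookAt matrix

void main()
{
    // Spread sprites across part of the ortho depth range.
    float z = layer * 0.01;
    gl_Position = projection * view * vec4(position, z, 1.0);
}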
This could also be an issue with blending.
When you use blending you need to render a bit differently: to render transparency correctly you must Z-sort the scene, otherwise artifacts can occur, especially when you have very dense geometry of transparent objects, or objects near them (many polygon edges close together). In addition, Z-fighting produces far worse artifacts when combined with blending.
Some ways to fix this:
- Z-sorting can be partially done with multi-pass rendering + depth test + switching the front face: first render all solid objects, then render the Z-sorted transparent objects with the front face set to the side not facing the camera, then render the same objects again with the front face set to the side facing the camera. You need the depth test enabled for this! This way you do not need to sort all the polygons in the scene, just the transparent objects. The results are not 100% correct for complex transparent geometries, but they are usually good enough (especially for dynamic scenes). This is how the output of this approach looks:
(the image shows a glass cup that is visually a bit off because of the blending function chosen for this case - darker pixels mean two layers of glass, on purpose, it is not a bug - which is why the opening looks as if the front/back faces were swapped)
- use less dense geometry for transparent objects
- get rid of the Z-fighting issues

Adding sunlight to a scene

I am rendering an interactive 3D scene and I am wondering: how do I add sunlight to it? I'll try to explain how I have it set up now.
What you see right now is that the directional light (the sun) is denoted by the yellow dot, which I want to replace with realistic sunlight.
The current order of drawing is:
For all objects:
Do a light depth pass for the shadow.
Then for all objects:
Do a draw pass for the object itself, using the light depth texture.
Where would I add a realistic sunlight pass? I have a few ideas about it:
After the current drawing order, save the output into a texture, and use a shader that takes that texture and adds sunlight on top of it.
After the current drawing order, use a shader that simply adds the sunlight to what has been drawn so far, so that it is applied after everything else is on the screen.
Or maybe draw the sunlight before the rest of the scene gets drawn?
How would you go about rendering a nice sunlight that represents a real-life sun?
To realistically simulate sunlight, you probably need to implement some form of global illumination. A lot of the lighting we see on objects comes not directly from the light source, but from light bounced off of other objects. Global illumination simulates the bounced light.
[Global Illumination] take[s] into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination).
Another technique that may not be physically accurate, but gives "nice"-looking results, is ambient occlusion:
ambient occlusion is used to represent how exposed each point in a scene is to ambient lighting. So the enclosed inside of a tube is typically more occluded (and hence darker) than the exposed outer surfaces; and deeper inside the tube, the more occluded (and darker) it becomes.
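As a rough starting point before any global illumination, sunlight is often modelled as a single directional light plus an ambient term, optionally scaled by an occlusion factor. A minimal fragment-shader sketch; every uniform name here is an assumption:
#version 330 core
// Fragment-shader sketch: directional "sun" light + ambient term.
// sunDirection, sunColor, ambientColor, albedo and occlusion are illustrative names.
in vec3 normalWorld;
out vec4 fragColor;

uniform vec3 albedo;        // surface color (or sample it from a texture)
uniform vec3 sunDirection;  // normalized, pointing from the scene towards the sun
uniform vec3 sunColor;
uniform vec3 ambientColor;
uniform float occlusion;    // 1.0 = fully exposed, 0.0 = fully occluded (e.g. from SSAO)

void main()
{
    float nDotL  = max(dot(normalize(normalWorld), sunDirection), 0.0);
    vec3 direct  = albedo * sunColor * nDotL;          // direct sunlight
    vec3 ambient = albedo * ambientColor * occlusion;  // crude stand-in for bounced light
    fragColor    = vec4(direct + ambient, 1.0);
}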

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine, following this tutorial. It works fine; I don't have any trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only the pixels that might be affected by a light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scale of the circle. This method would have three advantages:
- no cull-face issue
- no camera-position-inside-light-sphere issue
- much more efficient (the number of vertices is severely reduced, and no stencil test is needed)
Are there any disadvantages to using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But how do I calculate the scale of the resulting circle?
It should depend on the distance from the camera to the light and, somehow, on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles: we want to apply lighting only to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to, and that data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in that case none of the fragments will be generated and the light will "disappear".
The lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call that point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling is to transform a point on the surface of the sphere to screen coordinates and take the length of the vector from the projected center to it. It must be a point on the sphere's silhouette in screen space, obviously.
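A vertex-shader sketch of that suggestion, reusing pos and modelViewProjectionMatrix from the question. lightRadius and cameraRight are assumed uniforms (the world-space radius of the light volume and the camera's right vector); the result only approximates the true silhouette, but it is close enough for sizing the circle:
#version 330 core
// Sketch: project the light center and one point on the sphere's surface,
// then take their screen-space distance as the circle's radius.
layout(location = 0) in vec3 pos;          // light position, as in the question's code
uniform mat4 modelViewProjectionMatrix;
uniform float lightRadius;                 // world-space radius of the light volume (assumed)
uniform vec3 cameraRight;                  // camera's world-space right vector (assumed)

out vec2 centerpoint;
out float circleRadius;                    // in NDC units

vec2 toScreen(vec3 p)
{
    vec4 clip = modelViewProjectionMatrix * vec4(p, 1.0);
    return clip.xy / clip.w;               // perspective division
}

void main()
{
    centerpoint  = toScreen(pos);
    vec2 edge    = toScreen(pos + cameraRight * lightRadius);
    circleRadius = length(edge - centerpoint);
    gl_Position  = vec4(centerpoint, 0.0, 1.0);
}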

Doubts about ray tracing with GLSL

I am trying to develop a basic ray tracer. So far I have calculated the intersection with a plane and Blinn-Phong shading. I am working with a 500x500 window and my primary-ray generation code is as follows:
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insight.
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to ray trace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insight.
There's no right or wrong with projections. You could just as well map viewport pixels to azimuth and elevation angles. Actually, your way of doing this is not bad at all. I'd just pass the viewport dimensions in an additional uniform, instead of hardcoding them, and normalize the vector. The Z component effectively works like a focal length.
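A fragment-shader sketch of that suggestion; viewport and focalLength are assumed uniform names, and the output just visualizes the direction:
#version 330 core
// Sketch: primary-ray generation with the viewport size as a uniform
// instead of the hardcoded 250.0 from the 500x500 window.
uniform vec2 viewport;      // e.g. vec2(500.0, 500.0), passed from the application
uniform float focalLength;  // plays the role of the hardcoded 10.0
out vec4 fragColor;

void main()
{
    vec2 centered = gl_FragCoord.xy - 0.5 * viewport;
    // Normalize so later intersection math can assume a unit-length direction;
    // the sign of the Z component depends on your camera convention.
    vec3 rayDirection = normalize(vec3(centered, focalLength));
    fragColor = vec4(rayDirection * 0.5 + 0.5, 1.0); // visualize the direction for debugging
}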
I am also unsure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to ray trace a plane, do I need to construct the plane in OpenGL code using glVertex2f?
Ray tracing works on a global description containing the full scene. OpenGL primitives, however, are purely local, i.e. just individual triangles, lines or points, and OpenGL does not maintain a scene database. So geometry passed through the usual OpenGL drawing functions cannot be ray traced (at least not that way).
This is about the biggest obstacle to doing ray tracing with GLSL: you somehow need to deliver the whole scene to the shader as some freely accessible buffer.
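For a tiny scene, one possible (assumed) way to do that is a uniform array; larger scenes usually go into a texture, uniform buffer or shader storage buffer. A minimal sketch that intersects every fragment's ray against an array of spheres:
#version 330 core
// Sketch: the whole "scene" handed to the shader as a uniform array of spheres
// (center in .xyz, radius in .w). The uniform names here are illustrative.
#define NUM_SPHERES 4
uniform vec4 spheres[NUM_SPHERES];
uniform vec2 viewport;
out vec4 fragColor;

// Nearest positive hit distance along the ray, or -1.0 on a miss (rd must be normalized).
float intersectSphere(vec3 ro, vec3 rd, vec4 sph)
{
    vec3 oc = ro - sph.xyz;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - sph.w * sph.w;
    float h = b * b - c;
    return (h < 0.0) ? -1.0 : -b - sqrt(h);
}

void main()
{
    vec3 ro = vec3(0.0);
    vec3 rd = normalize(vec3(gl_FragCoord.xy - 0.5 * viewport, -1.0));

    float nearest = 1e20;
    for (int i = 0; i < NUM_SPHERES; ++i)
    {
        float t = intersectSphere(ro, rd, spheres[i]);
        if (t > 0.0 && t < nearest) nearest = t;
    }
    // White where a sphere was hit, black otherwise.
    fragColor = vec4(vec3(nearest < 1e20 ? 1.0 : 0.0), 1.0);
}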
It is possible to use ray marching to render certain types of complex scenes in a single fragment shader. Here are some examples (use Chrome or Firefox; requires WebGL):
Gift boxes: http://glsl.heroku.com/e#820.2
Torus Journey: http://glsl.heroku.com/e#794.0
Christmas tree: http://glsl.heroku.com/e#729.0
Modutropolis: http://glsl.heroku.com/e#327.0
The key to making this work is writing "distance functions" that tell the ray marcher how far a point is from the surface of an object. For more info on distance functions, see:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
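To illustrate the idea, here is a minimal, generic sphere-tracing loop with a single distance function (this is a sketch, not code taken from the demos above):
#version 330 core
// Minimal ray-marching (sphere-tracing) sketch with one signed distance function.
uniform vec2 viewport;
out vec4 fragColor;

// Signed distance from point p to a sphere of the given radius at the origin.
float sdSphere(vec3 p, float radius)
{
    return length(p) - radius;
}

// March along the ray; returns the hit distance or -1.0 if nothing was hit.
float rayMarch(vec3 rayOrigin, vec3 rayDir)
{
    float t = 0.0;
    for (int i = 0; i < 64; ++i)
    {
        float d = sdSphere(rayOrigin + rayDir * t, 1.0);
        if (d < 0.001) return t;  // close enough: treat it as a surface hit
        t += d;                   // safe step: cannot overshoot the nearest surface
        if (t > 100.0) break;     // ray left the scene
    }
    return -1.0;
}

void main()
{
    vec2 uv = (gl_FragCoord.xy - 0.5 * viewport) / viewport.y;
    vec3 ro = vec3(0.0, 0.0, 3.0);                     // camera in front of the sphere
    vec3 rd = normalize(vec3(uv, -1.5));               // ray through this pixel
    float t = rayMarch(ro, rd);
    fragColor = vec4(vec3(t > 0.0 ? 1.0 : 0.0), 1.0);  // white where the sphere is hit
}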