How to reflect a chrome sphere in a scene with a procedural texture - OpenGL

My scene background is a procedural texture that draws an ocean, or a lava floor, or some such other background. It extends completely under as well, as if you were inside a cubemap. It would be easier if I could assume the view was the same in all directions, but if there's a sun, for example, you cannot.
Now if I wanted to put a chrome sphere in the middle, what does it reflect? Does the sphere see the same thing as the main camera does?
Assume it's expensive to render the background, and I do not want to do it multiple times per frame. I can save a copy to use in the reflection if that helps.
Can someone suggest a general approach? Here's an example of the procedural texture I mean (this is all in the shader, no geometry other than a quad):
https://www.shadertoy.com/view/XtS3DD

To answer your first question: In the real world, the reflection you see in the sphere depends on both the position of the camera, and the position of the sphere itself. However, taking both positions into account is prohibitively expensive for a moving sphere when using cube mapping (the most common approach), since you have to re-render all six faces of the cubemap with each frame. Thus, most games "fake" reality by using a cubemap that is centered about the origin ((0, 0, 0) in world-space) and only rendering static objects (trees, etc.) into the cube map.
Since your background is entirely procedural, you can skip creating cubemap textures. If you can define your procedural background texture as a function of direction (not position!) from the origin, then you can use the normal vector of each point on the sphere, plus the sphere's position, plus the camera position to sample from your background texture.
Here's the formula for it, using some GLSL pseudocode:
vec3 N = normal vector for point on sphere
vec3 V = position of camera
vec3 S = position of point on sphere
vec3 ray = normalize(reflect(S - V, N));
// Reflect the view vector (pointing from the camera to the point on the
// sphere) about the sphere's normal at that point. Note that GLSL's
// reflect() expects the incident vector to point toward the surface.
vec4 color = proceduralBackgroundFunc(ray);
Above, color is the final output of the shader for point S on the sphere's surface.
Alternatively, you can prerender the background into a cube texture, and then sample from it like so (changing only the last line of code from above):
vec4 color = texture(cubeSample,ray);
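Putting the pieces together, a minimal fragment shader for the chrome sphere could look like the sketch below. Everything here is an illustrative assumption (the varying/uniform names, the placeholder proceduralBackgroundFunc, the useCubemap switch); only the reflect() call and the direction-only lookup are the essential parts.
#version 400 core
// Hypothetical fragment shader sketch for the reflective sphere.
in vec3 worldPos;                 // interpolated sphere surface position (world space)
in vec3 worldNormal;              // interpolated sphere normal (world space)
out vec4 fragColor;
uniform vec3 cameraPos;           // camera position in world space
uniform samplerCube cubeSample;   // optional prerendered background
uniform bool useCubemap;          // switch between procedural and cubemap lookup

vec4 proceduralBackgroundFunc(vec3 dir)
{
    // Placeholder: evaluate the ocean/lava/sky purely as a function of direction.
    return vec4(0.5 + 0.5 * dir, 1.0);
}

void main(void)
{
    vec3 N = normalize(worldNormal);
    vec3 ray = normalize(reflect(worldPos - cameraPos, N)); // view ray reflected about N
    fragColor = useCubemap ? texture(cubeSample, ray)
                           : proceduralBackgroundFunc(ray);
}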

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u' = 4/pi * atan(u)
v' = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d) / atan(1.0);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy) / atan(1.0);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
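As a rough sketch of that equidistant-cylindrical suggestion: if the horizon band is stored in a 2D texture, it could be sampled from a view direction roughly like below (assuming a y-up convention; the sampler name, cap latitude, and cap colors are illustrative assumptions, not part of the question).
// Hypothetical sampling of an equidistant cylindrical horizon strip with solid caps.
const float PI = 3.14159265359;
uniform sampler2D horizon_sampler;   // horizon band in equidistant cylindrical projection
uniform float cap_latitude;          // e.g. radians(45.0), assumed

vec3 sample_environment(vec3 d)
{
    d = normalize(d);
    float lat = asin(clamp(d.y, -1.0, 1.0));              // latitude, -pi/2 .. pi/2
    if (lat >  cap_latitude) return vec3(0.4, 0.6, 1.0);  // solid "sky" cap
    if (lat < -cap_latitude) return vec3(0.2, 0.2, 0.2);  // solid "ground" cap
    float lon = atan(d.z, d.x);                            // longitude, -pi .. pi
    vec2 uv = vec2(lon / (2.0 * PI) + 0.5,
                   lat / (2.0 * cap_latitude) + 0.5);
    return texture(horizon_sampler, uv).rgb;
}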
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment, but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always at the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final depth value of the position will be 1.0, which is the depth of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.) It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
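For reference, a complete vertex shader built on that idea might look like the sketch below; inVertex, projection, view, and cube_edge follow the snippet above, while the layout qualifier and the translation-free view matrix are assumptions.
#version 400 core
layout(location = 0) in vec3 inVertex;  // unit cube vertex position
out vec3 cube_edge;                     // direction used to sample the cubemap
uniform mat4 projection;
uniform mat4 view;                      // view matrix with the translation removed

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;         // forces depth to 1.0, the far plane
}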

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV=.5f;
Vertex aV[4];
aV[0].mPos=Vector(-theSize,-theSize,0);
aV[0].mUV=Point(-aUV,-aUV);
aV[1].mPos=Vector(theSize,-theSize,0);
aV[1].mUV=Point(aUV,-aUV);
aV[2].mPos=Vector(-theSize,theSize,0);
aV[2].mUV=Point(-aUV,aUV);
aV[3].mPos=Vector(theSize,theSize,0);
aV[3].mUV=Point(aUV,aUV);
// Position (in 3d space) and a point at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
For the fragment shader, I do this:
vec3 aRayOrigin=vec3(0.,0.,5.);
vec3 aRayDirection=normalize(vec3(vertex.mUV,-2.5));
//
// Hit object located at 0,0,0 oriented to uniform_PointAt's direction
//
vec4 aHH=RayHitObject(aRayOrigin,aRayDirection,uniform_PointAt.xyz);
if (aHH.x>0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening-- I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, they should be getting tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter-- what's the right way to do it?

Bypass classical deferred shading light volumes

I would like to "bypass" the classical light volume approach of deferred lighting.
Usually, when you want to affect pixels within a pointlight volume, you can simply render a sphere mesh.
I would like to try another way to do that: the idea is to render a cube which encompasses the sphere; the cube is circumscribed about the sphere, so each face's center is a point of the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere on your screen) if you had rendered the sphere instead.
So the main problem is to know which fragment will have to be discarded.
How could I do that:
In the fragment shader, I have my "camera" world coordinates, my fragment world coordinates, my sphere world center, and my sphere radius.
Thus I have the straight line whose direction vector is given by the camera and fragment world positions.
And I can build my sphere equation.
Finally I can know if the line intersects the sphere.
Is it correct to say that, from my point of view, if the line intersects the sphere, then this fragment must be considered a highlighted fragment (a fragment that would have been rendered if I had rendered a sphere instead)?
The check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere.
So what?
The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.
So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.
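As a rough illustration of that distance-based cull in a deferred point-light pass (the g-buffer layout and all names here are assumptions made for the sketch):
// Hypothetical distance cull against the reconstructed g-buffer position.
uniform sampler2D gbuffer_position;  // world-space position stored in the g-buffer
uniform vec3 light_position;         // the light source, i.e. the sphere's center
uniform float light_radius;          // maximum reach of the light
uniform vec2 screen_size;            // framebuffer size in pixels
out vec4 out_color;

void main()
{
    vec2 screen_uv = gl_FragCoord.xy / screen_size;
    vec3 frag_pos = texture(gbuffer_position, screen_uv).xyz;
    if (distance(frag_pos, light_position) > light_radius)
        discard;                     // beyond the light's reach: cull it
    // ... otherwise run the usual attenuation + lighting calculations ...
    out_color = vec4(0.0);           // placeholder for the lit result
}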
However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:
Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader. It's not like your VS is doing a lot of work or anything.
Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:
float A = dot(cam_position, cam_position);
float B = -2.0 * dot(cam_position, cam_sphere_center);
float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
float Discriminant = (B * B) - (4.0 * A * C);
If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.
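Wrapped into a fragment-shader prologue, the early-out could look roughly like this; cam_position being passed in from the vertex shader is an assumption, and the rest follows the snippet above.
// Hypothetical early discard before any g-buffer reads.
in vec3 cam_position;              // camera-space position of the cube's fragment
uniform vec3 cam_sphere_center;    // light/sphere center in camera space
uniform float radius;              // light radius

void main()
{
    float A = dot(cam_position, cam_position);
    float B = -2.0 * dot(cam_position, cam_sphere_center);
    float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
    if ((B * B) - (4.0 * A * C) < 0.0)
        discard;                   // the view ray misses the light sphere entirely
    // ... only now read the g-buffers and do the lighting as usual ...
}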

HLSL fixed lighting location

Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I'm not sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that my light stays in one place relative to the camera and not the object itself. I have tried multiplying the view matrix by the input normals, but the same result occurs.
Also, should I be setting the light direction according to the camera instead?
EDIT: removed pastebin link since that is against the rules...
Use camera-dependent values only for transforming the vertex position to the view and projected positions (needed in the shaders for clipping and the rasterizer stage). The video card needs to know where to draw your pixel.
For lighting, you normally pass, in addition to the camera-transformed values, the world-space position of the vertex and the world-space normal to the shader stages that need them (i.e. the pixel shader stage for Phong lighting).
So you can set your light position, or better the light direction, in the world-space coordinate system as a global variable for your shaders. With that, the lighting is independent of the camera view position.
If you want an effect like using a flashlight, you can set the light position to the camera position and the light direction to your look direction, so the bright parts are always in the center of your viewing frustum.
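A minimal sketch of the world-space approach, written in GLSL syntax for consistency with the other snippets in this thread (the structure is the same in HLSL, and every name here is an illustrative assumption):
// World-space directional light evaluated in the pixel/fragment stage.
uniform vec3 light_dir_world;   // constant light direction in world space
uniform vec3 light_color;
in vec3 world_normal;           // normal transformed by the model (world) matrix only
in vec4 base_color;             // material/texture color from earlier stages
out vec4 frag_color;

void main()
{
    vec3 N = normalize(world_normal);
    float ndotl = max(dot(N, -normalize(light_dir_world)), 0.0);
    frag_color = vec4(base_color.rgb * light_color * ndotl, base_color.a);
}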
good luck

Curved Frosted Glass Shader?

Well, making something transparent isn't that difficult, but I need that transparency to vary based on an object's curvature, to make it look like it isn't just a flat object. Something like the picture below.
The center is more transparent than the sides of the cylinder; it is closer to black, which is the background color. Then there is the bezel, which seems to have some sort of specular lighting at the top to make it more shiny, but I'd have no idea how to handle the transparency in that case. Using the normals of the surface relative to the eye position to determine the transparency value? Any help would be appreciated.
(moved comments into answer and added some more details)
Use (Sub Surface) scattering instead of transparency.
You can simplify things a lot, for example by assuming the light source is constant along the whole surface/volume ... then you need just the view-ray integration, not the whole volume integral per ray. I do it in my Atmospheric shader and it still looks pretty awesome, almost indistinguishable from the real thing (see some of the newer screenshots); I have compared it to photos of Earth and Mars and the results were pretty close without any REALLY COMPLICATED MATH.
There are several options for achieving this:
1. Voxel map (volume rendering)
It is easy to implement scattering in a volume rendering engine, but it needs a lot of memory and power.
2. Use 2 depth buffers (front and back faces)
This needs 2 passes with face culling on and the CW/CCW setting swapped. It is also easy to implement, but it cannot handle multiple objects overlapping along the Z axis of the camera view. The idea is to pass both depth buffers to the shader and integrate each pixel's ray along its path, accumulating/absorbing light from the light source. Something like this:
render the geometry to both depth buffers as 2 textures
render a quad covering the whole screen
for each fragment compute the ray line
compute the intersection points in both depth buffers
obtain 'length,ang'
integrate along the length using scattering to compute the pixel color
I use something like this:
vec3 p,p0,p1;                      // p0 front and p1 back face ray/depth buffer intersection points
const int n=16;                    // integration steps
float dl=length(p1-p0)/float(n);   // integration step length
vec3 dp=(p0-p1)/float(n);          // integration step vector (back face -> front face)
vec3 c=background_color;           // start with the background color behind the object
float q=abs(dot(normalize(p1-p0),light)); // = abs(cos(ang)), simple light shading
vec3 b;
int i;
for (p=p1,i=0;i<n;p+=dp,i++)       // p = p1 -> p0 path through the object
    {
    b=B0.rgb*dl;                   // B0 is the saturated color of the object
    c.r*=1.0-b.r;                  // some light is absorbed
    c.g*=1.0-b.g;
    c.b*=1.0-b.b;
    c+=b*q;                        // some light is scattered in
    }                              // here c is the final fragment color
After/during the integration you should normalize the color ... so that the resulting color is saturated around the real view depth of the rendered material. For more information see the Atmospheric scattering link below (this piece of code is extracted from it).
3. Analytical object representation
If you know the surface equation then you can compute the light path intersections inside shader without the need for depth buffers or voxel map. This Simple GLSL Atmospheric shader of mine uses this approach as ellipsoids are really easily handled this way.
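For instance, with a sphere (the simplest analytical surface) the entry/exit points p0 and p1 needed by the integration loop above can be computed directly in the shader; the function below is a sketch under that assumption, with ro/rd as the camera-space ray origin and unit direction.
// Hypothetical ray/sphere intersection giving the entry (p0) and exit (p1) points.
bool ray_sphere(vec3 ro, vec3 rd, vec3 center, float radius, out vec3 p0, out vec3 p1)
{
    vec3 oc = ro - center;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;        // reduced discriminant (rd is unit length)
    if (disc < 0.0) return false;  // the ray misses the volume
    float s = sqrt(disc);
    p0 = ro + rd * (-b - s);       // front-face (entry) intersection
    p1 = ro + rd * (-b + s);       // back-face (exit) intersection
    return true;
}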
4. Ray tracer
If you need precision and cannot use voxel maps, then you can try ray-tracing engines instead. But all scattering renderers/engines (#1, #2, #3 included) are ray tracers anyway... As you can see, all the techniques discussed here are essentially the same; the only difference is the method of obtaining the ray/object boundary intersection points.