Orthographic projection causing geometry to be clipped by the far plane - OpenGL

Let's say I have a rectangle that I want to render tilted 45 degrees about the x-axis relative to the camera. So it looks like this:
(clipped at the bottom only because it's the edge of the window).
As I shift this rectangle along its local y-direction, though (i.e. increase y on its vertices before rotating it about the x-axis), it eventually gets clipped by the far plane:
How do I prevent this? It seems unnatural that the rectangle should be cut off at all when moved away from the viewport but still within actual view.
I definitely want to render my rectangles orthographically.
I am quite new to OpenGL, so I'm thinking I'm missing something here.

The trick, it turns out, is to squash the z position of the vertices in the vertex shader, so that more of the geometry fits inside the frustum. This does distort the mesh, but under an orthographic projection it makes no visible difference (at least in my situation).
I.e. inside the vertex shader do this:
gl_Position.z *= 0.5; // or whatever scale you want
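For context, a minimal vertex shader sketch of that idea; the uniform names here are assumptions, and the 0.5 factor is arbitrary (any scale that keeps your geometry inside the clip volume works):
#version 330 core
layout(location = 0) in vec3 inPosition;
uniform mat4 u_model;          // assumed model transform
uniform mat4 u_orthoViewProj;  // assumed orthographic view-projection
void main() {
    gl_Position = u_orthoViewProj * u_model * vec4(inPosition, 1.0);
    gl_Position.z *= 0.5;      // squash depth so more geometry fits in the frustum
}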

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything (in hindsight this makes sense: a samplerCube lookup depends only on the ray's direction, not its length). After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading: even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact, no 2D projection of any part of a sphere can be truly equiangular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader (with d being the sampling direction, e.g. cube_edge), it reads:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle by a little bit and expands them near the corners, so that they cover a more nearly equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
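For reference, a minimal sketch of the complete fragment shader with this cylindrical remap dropped into the original shader from the question (the EAC variant above slots into the same place):
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    vec3 d = cube_edge;
    d /= length(d.xy);                  // project onto the unit cylinder
    d.xy /= max(abs(d.x), abs(d.y));    // then onto the cylinder's bounding box
    d.xy = atan(d.xy) / atan(1.0);      // equidistant correction, as above
    color = texture(skybox_sampler, d).rgb;
}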
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend you try an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry the environment is placed on is too small: you are not looking at the environment but at the inside of a small cube in which you are sitting. An environment map should behave as if you are always at the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader: if z equals w, the final depth of the position is guaranteed to be 1.0, which is the depth of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment onto it by looking up the map with the interpolated vertices of the cube. For a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
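One practical detail (covered in the LearnOpenGL article linked below): because z == w places the skybox exactly on the far plane, its fragments arrive with depth 1.0, and the default GL_LESS depth test rejects them against a depth buffer cleared to 1.0. Drawing the skybox last with a relaxed comparison avoids this:
glDepthFunc(GL_LEQUAL);  // accept fragments at depth == 1.0 (the far plane)
// ... draw the skybox cube ...
glDepthFunc(GL_LESS);    // restore the default comparison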
The solution is also explained in detail at LearnOpenGL - Cubemap.

(GLSL) Orienting raycast object to camera space

I'm trying to orient a ray-cast shape to a direction in world space. Here's a picture of it working correctly:
It works fine as long as the objects are pointing explicitly at the camera's focus point. But, if I pan the screen left or right, I get this:
So, the arrows are maintaining their orientation as if they were still looking at the center of the screen.
I know my problem here is that I'm not taking the projection matrix into account to tweak the direction the object is pointing. However, anything I try only makes things worse.
Here's how I'm drawing it: I set up these vertices:
float aUV=.5f;
Vertex aV[4];
aV[0].mPos=Vector(-theSize,-theSize,0);
aV[0].mUV=Point(-aUV,-aUV);
aV[1].mPos=Vector(theSize,-theSize,0);
aV[1].mUV=Point(aUV,-aUV);
aV[2].mPos=Vector(-theSize,theSize,0);
aV[2].mUV=Point(-aUV,aUV);
aV[3].mPos=Vector(theSize,theSize,0);
aV[3].mUV=Point(aUV,aUV);
// Position (in 3d space) and a point at direction are sent in as uniforms.
// The UV of the object spans from -.5 to .5 to put 0,0 at the center.
So the vertex shader just draws a billboard centered at a position I send (and using the vertex's position to determine the billboard size).
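(For context, a sketch of what such a billboard vertex shader might look like; the uniform names are assumptions, not the poster's actual code:)
#version 400 core
in vec3 inPos;              // corner offsets: (±theSize, ±theSize, 0)
in vec2 inUV;               // spans -0.5..0.5 so (0,0) is the quad's center
out vec2 vertex_mUV;
uniform mat4 u_view;        // assumed view matrix
uniform mat4 u_projection;  // assumed projection matrix
uniform vec3 u_center;      // billboard position in world space
void main(void) {
    // Offset the view-space center by the corner in view-aligned x/y,
    // so the quad always faces the camera.
    vec4 center = u_view * vec4(u_center, 1.0);
    gl_Position = u_projection * (center + vec4(inPos.xy, 0.0, 0.0));
    vertex_mUV = inUV;
}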
For the fragment shader, I do this:
vec3 aRayOrigin=vec3(0.,0.,5.);
vec3 aRayDirection=normalize(vec3(vertex.mUV,-2.5));
//
// Hit object located at 0,0,0 oriented to uniform_Pointat's direction
//
vec4 aHH=RayHitObject(aRayOrigin,aRayDirection,uniform_PointAt.xyz);
if (aHH.x>0.0) // Draw the fragment
I compute the final PointAt (in C++, outside of the shader), oriented to texture space, like so:
Matrix aMat=getWorldMatrix().Normalized();
aMat*=getViewMatrix().Normalized();
Vector aFinalPointAt=pointAt*aMat; // <- Pass this in as uniform
I can tell what's happening-- I'm shooting rays straight in no matter what, but when the object is near the edge of the screen, they should be getting tweaked to approach the object from the side. I've been trying to factor that into the pointAt, but whenever I add the projection matrix to that calculation, I get worse results.
How can I change this to make my arrows maintain their orientation even when perspective becomes an issue? Should I be changing the pointAt value, or should I be trying to tweak the ray's direction? And whichever is smarter-- what's the right way to do it?

OpenGL sheared near clipping plane

I have four arbitrary points (lt,rt,rb,lb) in 3d space and I would like these points to define my near clipping plane (lt stands for left-top, rt for right-top and so on).
Unfortunately, these points do not necessarily form a rectangle in screen space. They are, however, a rectangle in world coordinates.
The context is that I want to create a mirror surface by rendering the mirrored world into a texture. The mirror is an arbitrarily translated and rotated rectangle in 3d space.
I do not want to change the texture coordinates on the vertices, because that would lead to ugly pixelisation when you e.g. look at the mirror from the side. If I did that, culling would also not work correctly, which would lead to a huge performance impact in my case (small mirror, huge world).
I also cannot work with the stencil buffer, because in some scenarios I have mirrors facing each other, which would also lead to a huge performance drop. Furthermore, I would like to keep my rendering pipeline simple.
Can anyone tell me how to compute the according projection matrix?
Edit: Of course I have already moved my camera accordingly. That is not the problem here.
Instead of tweaking the projection matrix (which I don't think can be done in the general case), you should define an additional clipping plane. You do that by enabling:
glEnable(GL_CLIP_DISTANCE0);
And then set the gl_ClipDistance vertex shader output to the signed distance of the vertex from the mirror plane:
gl_ClipDistance[0] = dot(vec4(vertex_position, 1.0), mirror_plane);
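For completeness, a sketch of how mirror_plane could be computed on the CPU from the rectangle's corners; GLM is an assumption here, lt, rt, lb are the world-space corners from the question, and the cross-product order decides which side of the mirror is kept (flip it, or negate the plane, for the other side):
#include <glm/glm.hpp>

// lt, rt, lb: world-space corners of the mirror rectangle (glm::vec3)
glm::vec3 n = glm::normalize(glm::cross(rt - lt, lb - lt)); // plane normal
glm::vec4 mirror_plane(n, -glm::dot(n, lt));                // n·p + d = 0 form
With this convention, dot(vec4(vertex_position, 1.0), mirror_plane) in the shader is exactly the signed distance used above.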

OpenGL - skybox and frustum clipping

I want to create a skybox, which is just a textured cube around the camera. But actually I don't understand how this can work, because the camera's viewing volume is a frustum and the skybox is a cube. According to this source:
http://www.songho.ca/opengl/gl_projectionmatrix.html
Note that the frustum culling (clipping) is performed in the clip
coordinates, just before dividing by wc. The clip coordinates, xc, yc
and zc are tested by comparing with wc. If any clip coordinate is less
than -wc, or greater than wc, then the vertex will be discarded.
the vertices of the skybox faces should be clipped if they are outside of the frustum.
So it seems to me that the cube would actually have to be a frustum, matching the GL frustum's faces exactly so that my whole scene is wrapped inside the skybox; but I am sure this is wrong. Is there any way to fill the whole screen with something that wraps the whole GL frustum?
The formulation in your link is rather misleading. It is not individual vertices that get discarded: primitives are clipped against the view volume, so drawing a primitive whose vertices are completely off-screen does not prevent the part of it that intersects the screen from being drawn. (The picture in the link actually shows this being the case.)
That having been said, however, it may (or may not, depending on the design of your rendering code) be easier to simply draw a full-screen quad, and use the inverse of the projection matrix to calculate the texture coordinates instead.
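A minimal sketch of that full-screen-quad approach as a vertex shader (the uniform names are assumptions; the fragment shader can stay a plain samplerCube lookup on cube_edge, and the quad should be drawn with glDepthFunc(GL_LEQUAL) so depth 1.0 passes):
#version 400 core
in vec2 ndc_pos;                // full-screen quad corners in NDC, -1..1
out vec3 cube_edge;
uniform mat4 inv_projection;    // inverse of the projection matrix
uniform mat3 inv_view_rot;      // inverse of the view matrix's rotation part
void main(void) {
    gl_Position = vec4(ndc_pos, 1.0, 1.0);             // z == w: far plane
    vec4 p = inv_projection * vec4(ndc_pos, 1.0, 1.0); // unproject the corner
    cube_edge = inv_view_rot * (p.xyz / p.w);          // view-space dir to world
}
For a standard perspective projection, the far-plane position is an affine function of the NDC coordinates, so interpolating cube_edge across the quad gives correct per-pixel directions.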

Multiple view frustum clipping

The function gluPerspective() can be used to set near Z and far Z clipping planes.
I want to draw a scene clipped at a certain far Z plane,
and draw another scene beyond this Z plane.
Is it possible to do this clipping twice per frame?
There's no reason you shouldn't be able to do this.
Simply set up the first perspective, draw the first scene, then set up the second perspective and draw the second scene, all within the drawing of the same frame.
This is generally referred to as multi-pass rendering.
You might need to draw the farthest scene first and do a glClear(GL_DEPTH_BUFFER_BIT); before you draw the nearest scene.
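A minimal per-frame sketch of that approach, using the fixed-function pipeline to match gluPerspective (draw_far_scene/draw_near_scene and all clip distances are placeholders):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 100.0, 10000.0); // far scene: near=100, far=10000
glMatrixMode(GL_MODELVIEW);
draw_far_scene();

glClear(GL_DEPTH_BUFFER_BIT);                 // discard the far pass's depth

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0);     // near scene: near=0.1, far=100
glMatrixMode(GL_MODELVIEW);
draw_near_scene();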
A possibility is to assign different depth ranges to the scenes. In outline (draw_far_scene and draw_near_scene stand for your own draw calls):
glDepthRange(0.5, 1.0);
draw_far_scene();
glDepthRange(0.0, 0.5);
draw_near_scene();
You have to set up your projection matrices to perform the proper clipping for the near/far scenes.
The depth-range assignment is needed to prevent the depth buffer from 'merging' the two renderings.