Render plane on a glClipPlane - OpenGL

How can I render a plane on the surface of a glClipPlane clipping plane? The plane is rendered by drawing a polygon between a series of points that are located on the plane. Right now, it's producing some very fun stitching.
I assume glPolygonOffset won't help me here?
It is possible for me to just translate the polygon a tiny bit to one side of the clipping plane, but I would prefer a simpler, more elegant solution if one exists.

I assume you are not using GLSL? If you have GLSL 1.30, you can just set a vertex's gl_ClipDistance[N] >= 0.0 for clip plane N and that vertex will not be clipped. If you do that for every vertex in the polygon you want to span your plane, the polygon will not be clipped (against that plane, anyway).
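A minimal vertex-shader sketch of that idea (names like u_modelViewProj and a_position are placeholders, not from the question):

    #version 130
    uniform mat4 u_modelViewProj;   // hypothetical combined matrix
    in vec4 a_position;

    void main()
    {
        gl_Position = u_modelViewProj * a_position;
        // Any value >= 0.0 keeps the vertex against clip plane 0; use this
        // shader only while drawing the polygon that spans the clip plane.
        gl_ClipDistance[0] = 1.0;
    }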
As for glPolygonOffset (...), that affects the depth computed during rasterization. It happens after clipping, and likewise clipping has nothing to do with polygon depth, so you are correct that it will not help. You will have to translate your vertices before/during primitive assembly for this to work, which means either performing the translation in a vertex shader or using the fixed-function modelview matrix.
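In the fixed-function case, that nudge could look like this sketch, assuming nx/ny/nz is the clip plane's unit normal and drawPlanePolygon() is a hypothetical stand-in for your drawing code:

    const float eps = 1e-4f;        // small offset; tune until the stitching stops
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    // Shift the polygon slightly toward the kept side of the clip plane.
    glTranslatef(nx * eps, ny * eps, nz * eps);
    drawPlanePolygon();
    glPopMatrix();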

Related

OpenGL gl_ClipDistance vs glClipPlane

What is the difference between gl_ClipDistance and glClipPlane?
Which is more effective for clipping? Is the clip plane constrained to normalized device coordinates, i.e. from -1.0 to 1.0?
Assuming you are referring to the OpenGL function glClipPlane and the GLSL variable gl_ClipDistance: those two are not directly related.
glClipPlane controls clipping planes in the fixed-function pipeline and is deprecated.
gl_ClipDistance is the modern version and is set from inside the shader. It has to contain the distance of the current vertex to a clip plane. In this case OpenGL does not know anything about the clip plane itself, since the only relevant values are the distances of the vertices to these planes.
The values of the plane (in both cases) are technically not constrained to any range, but in practice only planes intersecting the [-1, 1] clip volume will have any visible effect, since clipping against that volume happens anyway.
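A minimal sketch of the modern path, assuming the plane equation is handed to the shader through a hypothetical uniform u_plane expressed in the same space as the incoming positions:

    /* C side: enable user clip distance 0; the shader supplies the distances */
    glEnable(GL_CLIP_DISTANCE0);

    // vertex shader
    #version 130
    uniform vec4 u_plane;           // plane equation (a, b, c, d)
    uniform mat4 u_modelViewProj;
    in vec4 a_position;

    void main()
    {
        gl_Position = u_modelViewProj * a_position;
        // Signed distance of this vertex to the plane; the negative side is clipped.
        gl_ClipDistance[0] = dot(u_plane, a_position);
    }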

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x) / (2.0 * M_PI) + 0.5, acos(modelPositionVarying.z / length(modelPositionVarying.xyz)) / M_PI);
When I do this in the fragment shader, it works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which is what I would also do if the model itself had texture coordinates, I get the result shown in the image below. The vertices are shown as points, and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0 and nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam, where you want the texture coordinate to wrap seamlessly from 1.0 back to 0.0, requires the mesh to handle it explicitly: the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example); the other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left.
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
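A minimal sketch of such a duplication pass, assuming an indexed triangle mesh with hypothetical Vertex/verts/indices names, spare capacity in the vertex array, and GL_REPEAT wrapping so u values above 1.0 wrap around (a real pass would also cache duplicates instead of emitting one per triangle):

    #include <math.h>

    typedef struct { float x, y, z, u, v; } Vertex;

    void fixSeam(Vertex *verts, int *vertCount, int *indices, int indexCount)
    {
        for (int t = 0; t < indexCount; t += 3) {
            float u0 = verts[indices[t + 0]].u;
            float u1 = verts[indices[t + 1]].u;
            float u2 = verts[indices[t + 2]].u;
            // A large u spread means the triangle straddles the seam.
            if (fmaxf(u0, fmaxf(u1, u2)) - fminf(u0, fminf(u1, u2)) > 0.5f) {
                for (int k = 0; k < 3; ++k) {
                    Vertex dup = verts[indices[t + k]];
                    if (dup.u < 0.5f) {
                        dup.u += 1.0f;                   // shift across the seam
                        verts[*vertCount] = dup;         // append the duplicate
                        indices[t + k] = (*vertCount)++; // repoint the triangle
                    }
                }
            }
        }
    }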
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
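A minimal fragment-shader sketch of that alternative, assuming a hypothetical interpolated normalVarying and a sampler uniform u_earthTexture:

    #version 130
    #define PI 3.14159265358979
    in vec3 normalVarying;              // for a unit sphere, normal == position
    uniform sampler2D u_earthTexture;
    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(normalVarying);  // renormalize after interpolation
        float u = atan(n.y, n.x) / (2.0 * PI) + 0.5;
        float v = acos(n.z) / PI;
        fragColor = texture(u_earthTexture, vec2(u, v));
    }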

OpenGL - skybox and frustum clipping

I want to create a skybox, which is just a textured cube around the camera. But actually I don't understand how this can work, because the camera's viewing volume is a frustum and the skybox is a cube. According to this source:
http://www.songho.ca/opengl/gl_projectionmatrix.html
Note that the frustum culling (clipping) is performed in the clip coordinates, just before dividing by w_c. The clip coordinates x_c, y_c and z_c are tested by comparing them with w_c. If any clip coordinate is less than -w_c, or greater than w_c, then the vertex will be discarded.
vertices of the skybox faces should be clipped if they are outside the frustum.
So it seems to me that the cube should actually be a frustum and should match the GL frustum faces exactly, so that my whole scene is wrapped inside that skybox; but I am sure this is bad. Is there any way to fill the whole screen with something that wraps the whole GL frustum?
The formulation from your link is rather bad. It is not actually vertices that get clipped, but rather fragments. Drawing a primitive with vertices completely off-screen does not prevent the fragments that would intersect with the screen from getting drawn. (The picture in the link actually shows this being the case.)
That having been said, however, it may (or may not, depending on the design of your rendering code) be easier to simply draw a full-screen quad, and use the inverse of the projection matrix to calculate the texture coordinates instead.
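A minimal sketch of that full-screen approach, assuming the skybox lives in a cubemap and a hypothetical uniform u_invViewProj holds the inverse of the projection matrix times the rotation-only view matrix:

    // vertex shader: pass the quad's corners straight through in NDC
    #version 130
    in vec2 a_ndc;                      // full-screen quad corners in [-1, 1]
    out vec2 v_ndc;

    void main()
    {
        v_ndc = a_ndc;
        gl_Position = vec4(a_ndc, 1.0, 1.0);  // pin the quad to the far plane
    }

    // fragment shader: un-project each pixel to a world-space view direction
    #version 130
    in vec2 v_ndc;
    uniform mat4 u_invViewProj;
    uniform samplerCube u_skybox;
    out vec4 fragColor;

    void main()
    {
        vec4 p = u_invViewProj * vec4(v_ndc, 1.0, 1.0);
        fragColor = texture(u_skybox, normalize(p.xyz / p.w));
    }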

OpenGL/GLUT - Project ModelView Coordinate to Texture Matrix

Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped between [0.0, 1.0] in the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value convenient for the desired effect).
To illustrate, here is a more complex version of the question using a sphere; the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL; I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially on more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find a planar polygon that makes up the shape which the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection transformation matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this would correspond to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1, 1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by 0.5; see the sketch after these two options.
Skip the ray tracing altogether, bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
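A minimal sketch of the first option using gluProject; since gluProject also applies the viewport transform, the mapping into [0, 1] becomes a division by the viewport size (px/py/pz stands for the hypothetical intersection point):

    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winX, winY, winZ;
    gluProject(px, py, pz, modelview, projection, viewport, &winX, &winY, &winZ);

    // window coordinates -> [0, 1] texture space
    double u = (winX - viewport[0]) / (double)viewport[2];
    double v = (winY - viewport[1]) / (double)viewport[3];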
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected, and use this as the plane. The quad must be planar (obviously).
Draw a plane at the identity matrix with dimensions 1x1x0, and map the texture onto points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack.
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity matrix (it should have no depth).
Convert this point from 1x1 space into pixel space by multiplying it by the number of pixels and rounding, or start your 2D combining logic here; a sketch of these last steps follows.
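A minimal sketch of the last two steps, with hypothetical vec3/mat4 helper types and invStack holding the inverse matrices in the order they should be applied:

    #include <math.h>

    vec3 p = intersectionPoint;                     // hypothetical vec3 type
    for (int i = 0; i < invCount; ++i)
        p = mat4_transform_point(invStack[i], p);   // back on the unit plane; p.z ~= 0

    // unit-plane space -> pixel space
    int px = (int)lroundf(p.x * texWidth);
    int py = (int)lroundf(p.y * texHeight);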

Multiple view frustum clipping

The function gluPerspective() can be used to set near Z and far Z clipping planes.
I want to draw a scene clipped at a certain far Z plane,
and draw another scene beyond this Z plane.
Is it possible to do this clipping twice per frame?
There's no reason you shouldn't be able to do this.
Simply set up the first perspective, draw the first scene, then set up the second perspective and draw the second scene, all within the drawing of the same frame.
This is generally referred to as multi-pass rendering.
You might need to draw the farthest scene first and do a glClear(GL_DEPTH_BUFFER_BIT); before you draw the nearest scene.
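A minimal fixed-function sketch of that approach, with hypothetical drawFarScene()/drawNearScene() helpers and example near/far values:

    // Pass 1: the distant scene, clipped to its own near/far range
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 100.0, 10000.0);
    glMatrixMode(GL_MODELVIEW);
    drawFarScene();

    // Drop the depth information so the near scene always draws on top
    glClear(GL_DEPTH_BUFFER_BIT);

    // Pass 2: the near scene
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    drawNearScene();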
A possibility is to assign different depth ranges to the scenes. Some pseudo code would be:

    glDepthRange(0.5, 1.0);
    draw_far_scene();
    glDepthRange(0.0, 0.5);
    draw_near_scene();
You have to set up your projection matrices to perform the proper clipping for the near / far scenes.
The depth-range assignment is needed to prevent the depth buffer from 'merging' both renderings.