What's the relationship between the barycentric coordinates of a triangle in clip space and those in screen space? [closed] - opengl

Suppose I have a triangle, say (PC0, PC1, PC2), and its barycentric coordinates are ((1,0), (0,1), (0,0)); they are all in clip space.
Now I want to calculate the interpolated barycentric coordinates in screen space. How can I do that so the result is correct? I have seen something like perspective-correct interpolation, but I can't find a good mapping relationship between the two.

The conversion from screen-space barycentric (b0,b1,b2) to clip-space barycentric (B0,B1,B2) is given by:
(B0, B1, B2) = (b0/w0, b1/w1, b2/w2) / (b0/w0 + b1/w1 + b2/w2)
Where (w0,w1,w2) are the w-coordinates of the three vertices of the triangle, as set by the vertex shader. (See How exactly does OpenGL do perspectively correct linear interpolation?).
The inverse transformation is given by:
(b0, b1, b2) = (B0*w0, B1*w1, B2*w2) / (B0*w0 + B1*w1 + B2*w2)
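A minimal, self-contained sketch of both conversions in C++ (the struct and function names are illustrative, not from the linked answer):

#include <cstdio>

struct Bary { float a, b, c; };

// Screen-space barycentrics -> clip-space (perspective-correct) barycentrics.
// w0, w1, w2 are the clip-space w values of the three vertices.
Bary screenToClip(Bary b, float w0, float w1, float w2) {
    float s0 = b.a / w0, s1 = b.b / w1, s2 = b.c / w2;
    float sum = s0 + s1 + s2;
    return { s0 / sum, s1 / sum, s2 / sum };
}

// Clip-space barycentrics -> screen-space barycentrics (the inverse mapping).
Bary clipToScreen(Bary B, float w0, float w1, float w2) {
    float s0 = B.a * w0, s1 = B.b * w1, s2 = B.c * w2;
    float sum = s0 + s1 + s2;
    return { s0 / sum, s1 / sum, s2 / sum };
}

int main() {
    // Example: the centre of the triangle in screen space, with unequal vertex w's.
    Bary b = { 1.0f / 3, 1.0f / 3, 1.0f / 3 };
    Bary B = screenToClip(b, 1.0f, 2.0f, 4.0f);
    Bary back = clipToScreen(B, 1.0f, 2.0f, 4.0f);
    std::printf("clip: %f %f %f, round trip: %f %f %f\n", B.a, B.b, B.c, back.a, back.b, back.c);
}

Round-tripping through both functions returns the original screen-space weights, which is a quick way to sanity-check the two formulas against each other.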

Related

Creating accurate 3D information using OpenGL? [closed]

I am interested in generating a 360 degree rendition of a real terrain model (a triangulated terrain) using OpenGL, so that I can extract accurate 3D information in the form of depth, orientation (azimuth) and angle of elevation. That is, for each pixel I want accurate information about the angle of elevation, azimuth and depth as measured from the camera position. The 360 degree view would be 'stitched together' after the camera is rotated around. My question is: how accurate would this information be?
If I had a camera width of 100 pixels, a horizontal field of view of 45 degrees and rotated 8 times around, would each orientation (1/10th of a degree) have the right depth and angle of elevation?
If this is not accurate due to the projection, is there a way to adjust for any deviations?
Just as an illustration, the figure below shows a panorama I created (not with OpenGL). The image has 3600 columns (one per 1/10th of a degree in azimuth, where each column spans the same angular unit), depth (in meters) and the elevation (not the angle of elevation). This was computed programmatically, without OpenGL.

Loading many images into OpenGL and rendering them to the screen [closed]

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each of the VBOs.
What would be the most efficient way to do this?
Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not a full 3D transform; just 2D position and scale) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad), one texture-coordinate array and one VAO, and keep the same vertex bindings for every draw call; only the bound texture changes per draw call.
Note: the texture coordinates will also have to be transformed with the vertices.
The conversion between the vertex coordinate system (2D) and the texture coordinate system is texturePos = vPos / 2 + 0.5, and therefore vPos = (texturePos - 0.5) * 2.
OpenGL's texture coordinate system goes from 0 to 1 (with the origin at the bottom left), while the vertex (screen) coordinate system goes from -1 to 1 (with the origin in the middle of the screen).
This way you can correctly derive textureCoords for your already transformed vertices.
OR
if you do not understand this method, your proposed method is alright, but be careful not to use too many textures, or else you will be creating and rendering lots of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, noting #Botje's helpful comment below, I realised the textureCoords array is not needed: if the textureCoords are calculated from the vertex positions using the mapping above, this can be done directly in the vertex shader. Make sure to have the vertices transformed first, though.
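A minimal host-side sketch of this setup (assuming GLM for the matrices; quadVAO, program, textures, positions and sizes are illustrative names, not from the original post):

// Vertex shader source: derives the texture coordinates from the untransformed
// unit-quad corners, so no separate texCoord array is needed.
const char* vertexSrc = R"(
    #version 330 core
    layout(location = 0) in vec2 aPos;     // quad corners in [-1, 1]
    uniform mat4 uTransform;               // per-image 2D position/scale
    out vec2 vTexCoord;
    void main() {
        vTexCoord = aPos * 0.5 + 0.5;      // [-1, 1] -> [0, 1]
        gl_Position = uTransform * vec4(aPos, 0.0, 1.0);
    }
)";

// Draw loop: one VAO, one quad, a different texture and transform per image.
// Assumes: GLuint program, quadVAO; std::vector<GLuint> textures;
//          std::vector<glm::vec3> positions, sizes.
glUseProgram(program);
glBindVertexArray(quadVAO);
for (size_t i = 0; i < textures.size(); ++i) {
    glm::mat4 transform = glm::translate(glm::mat4(1.0f), positions[i])
                        * glm::scale(glm::mat4(1.0f), sizes[i]);
    glUniformMatrix4fv(glGetUniformLocation(program, "uTransform"),
                       1, GL_FALSE, glm::value_ptr(transform));
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // the quad as a 4-vertex strip
}

Note that this sketch maps the full texture onto each quad; if you instead want the texture coordinates to follow the transformed vertex positions, apply uTransform before the [-1, 1] -> [0, 1] remapping.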

Any idea how to flip a mesh to generate an object? [closed]

I'm trying to reduce a very complex mesh (reduce the object data in the file itself).
For example: a human body. I want to cut it in half and save only that half of the mesh data on disk (Wavefront OBJ).
Now I want to read the data, push it to a render list, and then... mirror/double it in code.
But how? ;-) Is there a simple way to do this?
I searched SE and YouTube, but found only material on flipping normals.
Scale the mesh by (-1, 1, 1) to mirror it along the x axis, and reverse the face winding via glFrontFace. For example, in old-school OpenGL:
drawObject();          // draw the original half
glPushMatrix();
glScalef(-1, 1, 1);    // mirror across the YZ plane
glFrontFace(GL_CW);    // the mirroring reverses the winding order
drawObject();          // draw the mirrored half
glFrontFace(GL_CCW);   // restore the default winding
glPopMatrix();
If you are using shaders, then apply the local scaling to your MVP matrix instead. To mirror the model along the y axis, use a scale of (1, -1, 1), and similarly a scale of (1, 1, -1) for the z axis.
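For the shader path, a minimal sketch with GLM (the mvp, mvpLocation and program names are illustrative assumptions; drawObject() issues the actual draw call as in the snippet above):

// Draw the original half with the normal MVP matrix.
glUseProgram(program);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
drawObject();

// Append a local (-1, 1, 1) scale to mirror along the x axis, and flip the winding.
glm::mat4 mirroredMvp = mvp * glm::scale(glm::mat4(1.0f), glm::vec3(-1.0f, 1.0f, 1.0f));
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mirroredMvp));
glFrontFace(GL_CW);    // the negative scale reverses the triangle winding
drawObject();
glFrontFace(GL_CCW);   // restore the default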

Why does taking the cross product of tangent vectors give the normal vector? [closed]

I have a depth image, say z = f(x, y) with (x, y) in pixels. I want to calculate the normal vector at each pixel to create the normal map.
I've been trying the approach at Calculate surface normals from depth image using neighboring pixels cross product. In this approach, the "input" to the cross product is basically two tangent vectors, which are built from the gradients (dz/dx) and (dz/dy). These two tangent vectors span the tangent plane at the point (x, y, f(x, y)), and the cross product then gives the normal vector to this plane.
However, I don't understand why the normal vector to this plane (of the 3D surface (x, y, f(x, y))) will also be the normal vector in world coordinates that I'm trying to find. Is there any assumption here? How can this approach be used to find the normal vector at each pixel?
That is almost by definition of the normal to a surface. The normal is locally perpendicular to the tangent plane. And the cross-product of two non-collinear vectors is a vector that is perpendicular to the two vectors. That is why the cross-product of two non-collinear vectors of the tangent plane is perpendicular to that tangent plane. And thus it is along the normal direction.
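A minimal sketch of this neighbouring-pixel approach, assuming the depth image is a row-major float array and that one pixel step corresponds to one unit in x and y (a real camera model would scale the tangents accordingly):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// depth: row-major depth image of size width x height.
// Returns one unit normal per pixel (border pixels left as (0, 0, 1) here).
std::vector<Vec3> normalsFromDepth(const std::vector<float>& depth,
                                   int width, int height) {
    std::vector<Vec3> normals(depth.size(), Vec3{0, 0, 1});
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            // Central differences approximate dz/dx and dz/dy.
            float dzdx = (depth[y * width + (x + 1)] - depth[y * width + (x - 1)]) * 0.5f;
            float dzdy = (depth[(y + 1) * width + x] - depth[(y - 1) * width + x]) * 0.5f;
            // Tangent vectors along x and y; their cross product is the normal.
            Vec3 tx{1.0f, 0.0f, dzdx};
            Vec3 ty{0.0f, 1.0f, dzdy};
            normals[y * width + x] = normalize(cross(tx, ty));
        }
    }
    return normals;
}

The cross product of tx = (1, 0, dz/dx) and ty = (0, 1, dz/dy) is (-dz/dx, -dz/dy, 1), which is the usual expression for the (unnormalized) normal of the height field z = f(x, y).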

Distance to points on a cube and calculating a plane given a normal and a position [closed]

I have two questions for which I have not been able to find answers on Google.
My first question - generating planes
I am trying to calculate the 4 vertices for a finite plane based on a provided normal, a position, and a radius. How can I do this? An example of some pseudo-code or a description of an algorithm to produce the 4 vertices of a finite plane would be much appreciated.
Furthermore, it would be useful to know how to rotate a plane with an arbitrary normal to align with another plane, such that their normals are the same, and their vertices are aligned.
My second question - distance to points on a cube
How do I calculate the distance to a point on the surface of a cube, given a vector from the centre of the cube?
This is quite hard to explain, so my Google searches on this have been hard to phrase well.
Basically, I have a cube with side length s. I have a vector v from the centre of the cube, and I want to know the distance from the centre of the cube to the point on the surface that the vector points to. Is there a generalised formula that gives this distance?
An answer to either of these would be appreciated, but a solution to the cube distance problem is the one that would be more convenient at this moment.
Thanks.
Edit:
I say "finite plane", what I mean is a quad. Forgive me for bad terminology, but I prefer to call it a plane, because I am calculating the quad based on a plane. The quad's vertices are just 4 points on the surface of the plane.
Second question:
Say your vector is v = (x, y, z), and the cube is centred at the origin with side length s, so each face lies at distance s/2 from the centre.
The point where v hits the cube surface is the point where the largest coordinate in absolute value equals s/2, or mathematically:
(x, y, z) * ((s/2) / m)
where
m = max{ |x|, |y|, |z| }
The distance is:
|| (x, y, z) * ((s/2) / m) || = sqrt(x^2 + y^2 + z^2) * (s / (2 * max{ |x|, |y|, |z| }))
We can also write the answer in terms of norms:
distance = (s/2) * ||v||_2 / ||v||_inf
(these are the L2 norm and the L-infinity norm)
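A small sketch of this computation (the function name is illustrative; s/2 is used as the half-side, matching the formula above):

#include <algorithm>
#include <cmath>
#include <cstdio>

// Distance from the centre of an axis-aligned cube (side length s, centred at
// the origin) to the point where the direction v = (x, y, z) hits the surface.
float distanceToCubeSurface(float x, float y, float z, float s) {
    float m = std::max({ std::fabs(x), std::fabs(y), std::fabs(z) });  // ||v||_inf
    float len = std::sqrt(x * x + y * y + z * z);                      // ||v||_2
    return (s / 2.0f) * len / m;
}

int main() {
    // Pointing straight at a face centre of a cube with side 2: distance is s/2 = 1.
    std::printf("%f\n", distanceToCubeSurface(1, 0, 0, 2.0f));
    // Pointing at a corner of the same cube: distance is sqrt(3), about 1.732.
    std::printf("%f\n", distanceToCubeSurface(1, 1, 1, 2.0f));
}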