Creating accurate 3D information using OpenGL? [closed]

I am interested in generating a 360 degree rendering of a real terrain model (a triangulated terrain) using OpenGL, so that I can extract accurate 3D information, namely depth, orientation (azimuth) and angle of elevation. That is, for each pixel I want accurate values for the angle of elevation, azimuth and depth as measured from the camera position. The 360 degree view would be stitched together after rotating the camera around. My question is: how accurate would that information be?
If I had a camera width of 100 pixels and a horizontal field of view of 45 degrees, and rotated the camera 8 times around, would each orientation (1/10th of a degree) have the right depth and angle of elevation?
If this is not accurate due to the projection, is there a way to adjust for any deviations?
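To make the projection concern concrete, here is a small arithmetic sketch (my own illustration, using the 100-pixel width and 45-degree FOV from the question; every name in it is a placeholder). In a perspective render the pixel columns are spaced uniformly on the image plane, not uniformly in angle, so the azimuth of a column follows an arctangent rather than a linear relation:

#include <cmath>
#include <cstdio>

int main() {
    const int    W    = 100;                               // image width in pixels
    const double PI   = 3.14159265358979323846;
    const double hfov = 45.0 * PI / 180.0;                 // horizontal field of view
    const double f    = (W / 2.0) / std::tan(hfov / 2.0);  // focal length in pixels

    // The angular step per column shrinks toward the image edges, so columns
    // are not fixed 1/10-degree bins and must be remapped before stitching.
    for (int x = 0; x < W; x += 10) {
        double az = std::atan((x + 0.5 - W / 2.0) / f);    // azimuth of the column centre, radians
        std::printf("column %3d -> azimuth %+7.3f deg\n", x, az * 180.0 / PI);
    }
    return 0;
}

The same remapping applies per row for the angle of elevation. Note also that the depth recovered from the depth buffer, once converted back to eye space, is measured along the viewing axis rather than along each pixel's ray, so it has to be divided by the cosine of the pixel's off-axis angle if true range from the camera is wanted. With those per-pixel corrections applied, the stitched panorama can be made angularly consistent.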
Just as an illustration, the figure below shows a panorama I created (not with OpenGL, but computed programmatically). The image has 3600 columns (one per 1/10th of a degree in azimuth, so each column covers the same angular unit), along with depth (in meters) and elevation (not the angle of elevation).

Related

Loading many images into OpenGL and rendering them to the screen [closed]

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each of the VBOs.
What would be the most efficient way to do this?
Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not a full 3D transform; just 2D position and scale) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad), one texture-coordinate array, and one VAO, and keep the same vertex bindings for every draw call; only the bound texture changes from one draw call to the next.
Note: the texture coordinates will also have to be transformed with the vertices.
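A rough sketch of that per-image draw loop (my own illustration, assuming the quad is stored as a 4-vertex triangle strip in quadVAO, GLM for the matrices, and a cached uniform location uTransformLoc; none of these names come from the question):

glUseProgram(program);
glBindVertexArray(quadVAO);                        // the single quad, set up once

for (size_t i = 0; i < textures.size(); ++i) {
    // 2D placement: translate and scale only, no 3D rotation needed
    glm::mat4 transform = glm::translate(glm::mat4(1.0f), glm::vec3(positions[i], 0.0f));
    transform = glm::scale(transform, glm::vec3(sizes[i], 1.0f));

    glUniformMatrix4fv(uTransformLoc, 1, GL_FALSE, glm::value_ptr(transform));
    glBindTexture(GL_TEXTURE_2D, textures[i]);     // only the texture changes per draw call
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}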
I think the conversion between the vertex coordinate system (2D) and the texture coordinate system is texturePos = vPos / 2 + 0.5, and therefore vPos = (texturePos - 0.5) * 2.
OpenGL's textureCoords system goes from 0 to 1 (with the origin at the bottom left of the texture),
while the vertex (screen) coordinate system goes from -1 to 1 (with the origin at the centre of the screen).
This way you can correctly transform textureCoords to your already transformed vertices.
OR
If you do not understand this method, your proposed method is alright too, but be careful not to use way too many textures, or else you will end up creating and rendering lots of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, as @Botje helpfully pointed out in the comments below, I realised the textureCoords array is not needed: if your textureCoords are calculated relative to the vertex positions through the relation above, the calculation can be done directly in the vertex shader. Make sure to transform the vertices first, though.
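A minimal vertex shader sketch of that idea (my own wording of it; the texture coordinates here are derived from the untransformed quad corners so the whole texture always covers the quad, and the attribute/uniform names are placeholders):

const char* quadVertexShader = R"(
#version 330 core
layout(location = 0) in vec2 aPos;   // quad corners in [-1, 1]
uniform mat4 uTransform;             // per-image position/size transform
out vec2 vTexCoord;

void main() {
    vTexCoord   = aPos * 0.5 + 0.5;                  // map [-1, 1] -> [0, 1]
    gl_Position = uTransform * vec4(aPos, 0.0, 1.0); // place/scale the quad
}
)";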

Field of view for uncentered, distorted image [closed]

Consider the following diagram and equations representing a pinhole camera:
Suppose the image size is W times H pixels, and that there is no nonlinear distortion. To compute the field of view I proceed as in the picture below:
where \tilde{H} is the image height in the image plane (not in pixel coordinates), and s_y is the height of a pixel in image plane units.
In an exercise I'm told to account for the fact that the principal point might not be in the image center.
How could this happen, and how do we correct the FOV in this case?
Moreover, suppose the image was distorted as follows before being projected onto pixel coordinates:
How do we account for the distortion in the FOV? How is it even defined?
The principal point may not be centered in the image for a variety of reasons, for example, the lens may be slightly decentered due to the mechanics of the mount, or the image may have been cropped.
To compute the FOV with a decentered principal point, you just redo your computation separately for the angles to the left and right sides of the focal axis (for the horizontal FOV; above and below for the vertical), and add the two angles up.
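For example, with the focal length expressed in pixels as f_x and the principal point at pixel column c_x (this notation is mine; in the question's symbols f_x = f / s_x), the horizontal FOV of a W-pixel-wide image becomes

\mathrm{FOV}_h \;=\; \arctan\!\left(\frac{c_x}{f_x}\right) \;+\; \arctan\!\left(\frac{W - c_x}{f_x}\right)

which reduces to the familiar 2 \arctan\!\left(\frac{W}{2 f_x}\right) when c_x = W/2; the vertical FOV follows analogously with c_y, f_y and H.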
The FOV is defined in exactly the same way, as the angle between the light rays that project to the left and right extrema of the image row containing the principal point. To compute it you need to first undistort those pixel coordinates. For ordinary photographic lenses, where the barrel term dominates the distortion, the result is a slightly larger FOV than what you compute ignoring the distortion. Note also that, due to the nonlinearity of the distortion, the horizontal, vertical and diagonal FOVs are not simply related through the image aspect ratio once the distortion is taken into account.
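As a sketch of that procedure (my own formulation, assuming a purely radial Brown-Conrady model, which the question does not specify): with x_d = x_u (1 + k_1 r^2 + k_2 r^4 + \dots) in normalized image-plane coordinates (r^2 = x_u^2 + y_u^2), take the distorted pixels (0, c_y) and (W-1, c_y) at the two ends of the row through the principal point, numerically invert the model to obtain their undistorted normalized coordinates x_u^l < 0 < x_u^r, and then

\mathrm{FOV}_h \;=\; \arctan\!\left(-x_u^{l}\right) \;+\; \arctan\!\left(x_u^{r}\right)

With a barrel-dominated lens the undistorted |x_u| at the edges is larger than the distorted value, which is why this comes out slightly larger than the FOV computed while ignoring the distortion.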

Any idea how to flip a mesh to generate an object? [closed]

I am trying to reduce a very complex mesh (reduce the object data in the file itself).
For example: a human body. I want to cut it in half and save only that half of the mesh data on disk (Wavefront OBJ).
Now I want to read the data, push it to a render list, and then... mirror/duplicate it in code.
But how? ;-) Is there a simple way to do this?
I searched SE and YouTube, but only found material on flipping normals.
Scale the mesh by -1 1 1 (to mirror through the x axis), and reverse the face winding via glFrontFace. For example in old school OpenGL:
drawObject();                 // draw the stored half as-is
glPushMatrix();
glScalef(-1, 1, 1);           // mirror across x (negate the x coordinates)
glFrontFace(GL_CW);           // the negative scale flips the winding, so swap front faces
drawObject();                 // draw the mirrored half
glFrontFace(GL_CCW);          // restore the default winding
glPopMatrix();
If you are using shaders, then apply the same local scaling to your MVP matrix. To mirror the model along the y axis, use a scale of 1 -1 1, and similarly a scale of 1 1 -1 for the z axis.
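A hedged sketch of the shader-based variant (assuming GLM, a uMVP uniform location, and proj/view matrices already set up; these names are mine, not from the answer above):

glm::mat4 model  = glm::mat4(1.0f);
glm::mat4 mirror = glm::scale(glm::mat4(1.0f), glm::vec3(-1.0f, 1.0f, 1.0f));

// first pass: the half that is actually stored in the OBJ file
glUniformMatrix4fv(uMVP, 1, GL_FALSE, glm::value_ptr(proj * view * model));
drawObject();

// second pass: the mirrored half; the negative scale flips the triangle
// winding, so clockwise triangles are front-facing for this draw only
glUniformMatrix4fv(uMVP, 1, GL_FALSE, glm::value_ptr(proj * view * model * mirror));
glFrontFace(GL_CW);
drawObject();
glFrontFace(GL_CCW);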

How to draw ortho axes for object in perspective projection using modern OpenGL? [closed]

I have a 3D scene with a perspective projection. I can also select an object in the scene, and I need to draw axes for the selected object. The problem is that the axes don't keep their size under the perspective projection: if the object is far from the eye (camera), the axes become small too.
How can I draw axes with the same size regardless of the position of the eye (camera)?
There are two ways to achieve this:
Shader-only approach
In a perspective projection, the change of size with depth is caused by the perspective divide (dividing all components by w). If you want to prevent this from happening, you can multiply the x and y coordinates of a vertex by the w coordinate of its projected position, which cancels out the perspective divide. It's a bit tricky because the correction has to happen before all the other transformations, but something along this line should work for the general case:
vec4 ndc = MVP * vec4(pos, 1);                    // project once just to get w
float sz = ndc.w;                                 // w grows with the distance to the camera
gl_Position = MVP * vec4(pos.xy * sz, pos.z, 1);  // pre-scale x/y so the later divide by w cancels out
Drawback: Needs a specialized shader
CPU approach
The other option is to render the axes with an orthographic projection, while calculating on the CPU the location where they have to be placed.
This can, for example, be done by projecting the target location with the perspective projection and performing the perspective divide. The resulting x,y components give the location in screen space where the axes have to be placed.
Now use this position to render the axes with an orthographic projection, which will maintain their size no matter how far away they are.
Drawback: with this approach the depth values might not be compatible with the perspective-projected part of the scene.
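A small sketch of that CPU-side placement (assuming GLM; origin is the selected object's world position, uMVP the shader's matrix uniform, and the rest are my placeholder names):

// project the object's origin and do the perspective divide by hand
glm::vec4 clip  = proj * view * glm::vec4(origin, 1.0f);
glm::vec2 ndcXY = glm::vec2(clip) / clip.w;                            // screen position in [-1, 1]

// place and draw the axes with an orthographic projection instead
glm::mat4 ortho     = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
glm::mat4 place     = glm::translate(glm::mat4(1.0f), glm::vec3(ndcXY, 0.0f));
glm::mat4 axisScale = glm::scale(glm::mat4(1.0f), glm::vec3(0.1f));    // fixed on-screen size

glUniformMatrix4fv(uMVP, 1, GL_FALSE, glm::value_ptr(ortho * place * axisScale));
drawAxes();                                                            // size stays constant with distance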

Distance to points on a cube and calculating a plane given a normal and a position [closed]

I have two questions that I have had very little luck finding answers to on Google.
My first question - generating planes
I am trying to calculate the 4 vertices for a finite plane based on a provided normal, a position, and a radius. How can I do this? An example of some pseudo-code or a description of an algorithm to produce the 4 vertices of a finite plane would be much appreciated.
Furthermore, it would be useful to know how to rotate a plane with an arbitrary normal to align with another plane, such that their normals are the same, and their vertices are aligned.
My second question - distance to points on a cube
How do I calculate the distance to a point on the surface of a cube, given a vector from the centre of the cube?
This is quite hard to explain, and so my google searches on this have been hard to phrase well.
Basically, I have a cube with side length s. I have a vector from the centre of the cube v, and I want to know the distance from the centre of the cube to the point on the surface that that vector points to. Is there a generalised formula that can tell me this distance?
An answer to either of these would be appreciated, but a solution to the cube distance problem is the one that would be more convenient at this moment.
Thanks.
Edit:
I say "finite plane", what I mean is a quad. Forgive me for bad terminology, but I prefer to call it a plane, because I am calculating the quad based on a plane. The quad's vertices are just 4 points on the surface of the plane.
Second Question:
Say your vector is v=(x,y,z)
So the point where it hits the cube surface is the point where the largest coordinate in absolute value equals half the side length, s/2, or mathematically:
(x,y,z) * (s / (2m))
where
m = max{ |x| , |y| , |z| }
The distance is:
|| (x,y,z) * (s / (2m)) || = sqrt(x^2 + y^2 + z^2) * (s / (2 * max{ |x| , |y| , |z| }))
We can also express the answer in norms:
distance = (s/2) * ||v||_2 / ||v||_inf
(These are the L2 norm and the L-infinity norm.)
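For completeness, the same computation as a small helper function (my own sketch, assuming a cube of side length s centred at the origin):

#include <algorithm>
#include <cmath>

// Distance from the cube's centre to the surface point that v = (x, y, z) points at.
double distanceToCubeSurface(double x, double y, double z, double s) {
    double m   = std::max({std::fabs(x), std::fabs(y), std::fabs(z)});  // ||v||_inf
    double len = std::sqrt(x * x + y * y + z * z);                      // ||v||_2
    return (s / 2.0) * len / m;
}

// e.g. for a unit cube (s = 1): an axis direction (1,0,0) gives 0.5,
// a diagonal direction (1,1,1) gives sqrt(3)/2, about 0.866.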