Any idea how to flip a mesh to generate an object? [closed] - opengl

I am trying to reduce a very complex mesh (reduce the object data in the file itself).
For example: a human body. I want to cut it in half and save only the half mesh data on disk (Wavefront OBJ).
Now I want to read the data, push it to a render list, and then... mirror/double it in code.
But how? ;-) Is there a simple way to do this?
I searched SE and YouTube, but found only material on flipping normals.

Scale the mesh by (-1, 1, 1) (to mirror it along the x axis) and reverse the face winding via glFrontFace. For example, in old-school OpenGL:
drawObject();                 // draw the original half
glPushMatrix();
glScalef(-1, 1, 1);           // mirror along the x axis
glFrontFace(GL_CW);           // the negative scale reverses the winding
drawObject();                 // draw the mirrored half
glFrontFace(GL_CCW);          // restore the default winding
glPopMatrix();
If you are using shaders, then apply the local scaling to your MVP matrix. To mirror the model along the y axis, use a scale of (1, -1, 1), and similarly a scale of (1, 1, -1) for the z axis.
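For illustration, a minimal sketch of the shader path, assuming GLM is used for the matrices (mvpLocation, model, view, projection and drawObject() are placeholders, not from the original answer; needs <glm/glm.hpp>, <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp>):
glm::mat4 mirrored = glm::scale(model, glm::vec3(-1.0f, 1.0f, 1.0f)); // mirror along x
glm::mat4 mvp = projection * view * mirrored;
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
glFrontFace(GL_CW);   // the negative scale reverses the triangle winding
drawObject();
glFrontFace(GL_CCW);  // restore the default winding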

Related

What's the relationship between the barycentric coordinates of a triangle in clip space and the ones in screen space? [closed]

Suppose I have a triangle, say (PC0, PC1, PC2), whose barycentric coordinates are ((1,0), (0,1), (0,0)); they are all in clip space.
Now I want to calculate the interpolated barycentric coordinates in screen space. How can I do that correctly? I have seen something like perspective-correct interpolation, but I can't find a good mapping relationship between them.
The conversion from screen-space barycentric coordinates (b0, b1, b2) to clip-space barycentric coordinates (B0, B1, B2) is given by:
(B0, B1, B2) = (b0/w0, b1/w1, b2/w2) / (b0/w0 + b1/w1 + b2/w2)
Where (w0,w1,w2) are the w-coordinates of the three vertices of the triangle, as set by the vertex shader. (See How exactly does OpenGL do perspectively correct linear interpolation?).
The inverse transformation is given by:
(b0, b1, b2) = (B0*w0, B1*w1, B2*w2) / (B0*w0 + B1*w1 + B2*w2)
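For illustration, the two formulas as plain C functions (a sketch, not part of the original answer; the function names are made up):
// Screen-space barycentrics (b0,b1,b2) -> clip-space barycentrics (B0,B1,B2),
// given the clip-space w of the three vertices.
void screen_to_clip_bary(const float b[3], const float w[3], float B[3]) {
    float s = b[0]/w[0] + b[1]/w[1] + b[2]/w[2];
    for (int i = 0; i < 3; ++i)
        B[i] = (b[i] / w[i]) / s;
}
// Inverse: clip-space barycentrics back to screen-space barycentrics.
void clip_to_screen_bary(const float B[3], const float w[3], float b[3]) {
    float s = B[0]*w[0] + B[1]*w[1] + B[2]*w[2];
    for (int i = 0; i < 3; ++i)
        b[i] = (B[i] * w[i]) / s;
}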

Creating accurate 3D information using OpenGL? [closed]

I am interested in generating a 360-degree rendition of a real terrain model (a triangulated terrain) using OpenGL so that I can extract accurate 3D information in the way of depth, orientation (azimuth) and angle of elevation. That is, so that for each pixel I end up with accurate information about the angle of elevation, azimuth and depth as measured from the camera position. The 360-degree view would be stitched together after the camera is rotated around. My question is: how accurate would the information be?
If I had a camera width of 100 pixels, a horizontal field of view of 45 degrees and rotated 8 times around, would each orientation (1/10th of a degree) have the right depth and angle of elevation?
If this is not accurate due to projection, is there a way to adjust for any deviations?
Just as an illustration, the figure below shows a panorama I created (not with OpenGL). The image has 3600 columns (one per 1/10th of a degree in azimuth, where each column has the same angular unit), depth (in meters) and the elevation (not the angle of elevation). This was computed programmatically, without OpenGL.
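A note on the projection question (a sketch based on standard perspective geometry, not from the original post): with a perspective projection the pixel columns are evenly spaced on the image plane, not in angle, so the azimuth of a column follows an arctangent rather than a linear law. Using the 100-pixel width and 45-degree field of view from the question:
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    const int width = 100;                          /* image width in pixels */
    const double halfTan = tan(22.5 * PI / 180.0);  /* half of the 45-degree HFOV */
    for (int x = 0; x < width; ++x) {
        double u = ((x + 0.5) / width) * 2.0 - 1.0;       /* column centre in [-1, 1] */
        double azimuth = atan(u * halfTan) * 180.0 / PI;  /* degrees off the view axis */
        printf("column %3d -> azimuth %+7.3f deg\n", x, azimuth);
    }
    return 0;
}
The angular step per column is largest at the centre of the image and smallest at the edges, so a panorama with constant angular resolution would need the columns resampled accordingly.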

Loading many images into OpenGL and rendering them to the screen [closed]

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each of the VBOs.
What would be the most efficient way to do this?
Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not a 3D transform; just a 2D position-and-size transform) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad), one texture-coordinate array and one VAO, and keep the same vertex bindings for every draw call; only the bound texture changes between draw calls.
Note: the texture coordinates will also have to be transformed with the vertices.
I think the conversion between the vertex coordinate system (2D) and the texture coordinate system is texturePos = vPos / 2 + 0.5, and therefore vPos = (texturePos - 0.5) * 2.
OpenGL's texture coordinate system goes from 0 to 1 (with the axes starting at the bottom left),
while the vertex (screen) coordinate system goes from -1 to 1 (with the axes starting in the middle of the screen).
This way you can correctly derive texture coordinates for your already transformed vertices.
OR
if you do not understand this method, your proposed method is alright, but be careful not to have way too many textures or else you will be creating and rendering lots of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, following @Botje's helpful comment below, I realised the textureCoords array is not needed: since the texture coordinates are calculated from the vertex positions via the formula above, this can be done directly in the vertex shader. Make sure the vertices have been transformed first, though.
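For illustration, a minimal GLSL vertex shader along those lines (a sketch under my own assumptions; uTransform is a made-up name for the 2D position/size matrix mentioned above):
#version 330 core
layout(location = 0) in vec2 aPos;   // unit quad corners in [-1, 1]
uniform mat3 uTransform;             // per-image 2D position/size transform
out vec2 vTexCoord;

void main() {
    vTexCoord = aPos * 0.5 + 0.5;            // map [-1, 1] to [0, 1], per the formula above
    vec3 p = uTransform * vec3(aPos, 1.0);   // place and size the quad on screen
    gl_Position = vec4(p.xy, 0.0, 1.0);
}
Here the texture coordinates are derived from the untransformed quad corners, so the whole texture always maps onto the quad regardless of where the transform places it.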

How to create multiple viewports where each viewport has a different scene? [closed]

I have a mini-project for Computer Graphics; the idea of the project is a simulation of gravity on different planets. I tried to find a tutorial for creating different viewports, where each viewport has its own code so I can customize each viewport separately, but I couldn't find one; most of the tutorials and examples are about showing the same scene in different viewports from different angles of view.
The main vision in my mind is to split the screen into 3 parts, where each part has a falling object whose acceleration simulates the gravitational acceleration on that planet.
Usually you have one glViewport call and render your scene afterwards. To render two different scenes, you just do that twice:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // this clears the whole window
glViewport(0, 0, 100, 100);
render_scene_zero();
glViewport(100, 0, 100, 100);
render_scene_one();
Here render_scene_zero and render_scene_one are responsible for drawing the respective scene as if it were the only scene visible. They can draw completely different things, e.g. a cube in scene zero and a sphere in scene one.
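Applied to the three-planet idea from the question, a sketch could look like this (windowWidth/windowHeight, render_falling_object and the gravity values are assumptions, not part of the original answer):
int w = windowWidth / 3, h = windowHeight;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clears the whole window once
glViewport(0, 0, w, h);
render_falling_object(3.7f);    // left third: roughly Mercury's surface gravity in m/s^2
glViewport(w, 0, w, h);
render_falling_object(9.81f);   // middle third: Earth
glViewport(2 * w, 0, w, h);
render_falling_object(24.8f);   // right third: roughly Jupiter
where render_falling_object(g) would draw one scene whose falling object accelerates with the given g.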

How to make low-res graphics with OpenGL? [closed]

I am writing using OpenGL in C and I want to make old-school-style graphics, like Star Fox for the SNES. So I plan to have a 2D array (I'll figure out how; just talking pseudocode for now) of fragments that will represent the lower resolution (you can imagine it just containing RGB color info). So I'm going to be writing my own code that builds the 3D world and rasterizes it into this 2D array (I might try to get the GPU to help there). Does this even make sense? Are there better ways to make low-res 2D graphics using OpenGL?
Render scene to low-resolution FBO.
Stretch-blit FBO contents to screen using a textured quad or glBlitFramebuffer().
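A minimal sketch of that idea (the 320x240 size and the variable names are placeholders, not from the original answer; a depth renderbuffer would also be needed if the scene uses depth testing):
// one-time setup: a low-resolution color texture attached to an FBO
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 320, 240, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// each frame: render the 3D scene into the small FBO...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 320, 240);
// ... draw the scene here ...

// ...then stretch-blit it to the window; GL_NEAREST keeps the chunky pixels
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 320, 240, 0, 0, windowWidth, windowHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);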