I have a struct QUAD that stores 4 pointers to 4 VECTOR3D (each of which holds 3 floats) so that I can draw a quad mesh.
From what I understand, whenever I draw a mesh I need normals as well to properly light/shade it, and that's relatively easy when the mesh is lying on a plane, using one normal per face.
When I have a 2-by-2 grid of quads lying on the XZ plane and raise its centre (0,0,0) to some point, say (0, 4, 0), it starts to form a real 3D shape, and then I need to calculate the normals again. I'm having a hard time understanding how, and what exactly, needs to be calculated. As expected, the 3D shape is shaded as if it were still a flat mesh, so it doesn't look like the real shape. One explanation says I need to calculate normals per vertex instead of per face.
Does that mean I need to calculate normals for all the corners of the mesh? And once I have the normals, what do I do with them? I'm still using the old glBegin/glEnd methods, but now I feel like I need to use glDrawArrays. I'm deeply confused and I'm pretty sure I'm not making much sense, but I'd much appreciate your help.
If you need a flat-looking surface, then your normals are simply the normals of each quad's plane. If you need a "soft-looking" surface, you need to blend the normals (read this and watch this cool simple video) - that will add a sort of gradient.
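As a concrete starting point, here is a minimal C sketch of that blending. It is not tied to your exact QUAD layout: the index-based layout, helper names and winding below are assumptions. The idea is to compute each quad's face normal, add it to every vertex that quad touches, then normalize the sums to get one smooth normal per vertex.

```c
#include <math.h>

typedef struct { float x, y, z; } VECTOR3D;   /* 3 floats, as in the question */

static VECTOR3D vsub(VECTOR3D a, VECTOR3D b) {
    VECTOR3D r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static VECTOR3D vcross(VECTOR3D a, VECTOR3D b) {
    VECTOR3D r = { a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
    return r;
}

static void vnormalize(VECTOR3D *v) {
    float len = sqrtf(v->x * v->x + v->y * v->y + v->z * v->z);
    if (len > 0.0f) { v->x /= len; v->y /= len; v->z /= len; }
}

/* Assumed layout: a shared vertex pool plus 4 indices per quad.
 * One smooth normal is produced per shared vertex. */
void compute_vertex_normals(const VECTOR3D *verts, const int (*quads)[4],
                            int num_verts, int num_quads, VECTOR3D *normals_out)
{
    for (int i = 0; i < num_verts; ++i)
        normals_out[i] = (VECTOR3D){ 0.0f, 0.0f, 0.0f };

    for (int q = 0; q < num_quads; ++q) {
        /* Face normal from two edges (assumes counter-clockwise winding;
         * swap e1 and e2 if your normals come out pointing inward). */
        VECTOR3D e1 = vsub(verts[quads[q][1]], verts[quads[q][0]]);
        VECTOR3D e2 = vsub(verts[quads[q][3]], verts[quads[q][0]]);
        VECTOR3D fn = vcross(e1, e2);

        /* Accumulate the face normal on every vertex the quad touches. */
        for (int k = 0; k < 4; ++k) {
            normals_out[quads[q][k]].x += fn.x;
            normals_out[quads[q][k]].y += fn.y;
            normals_out[quads[q][k]].z += fn.z;
        }
    }

    /* Normalizing the sums gives the blended ("soft") per-vertex normal. */
    for (int i = 0; i < num_verts; ++i)
        vnormalize(&normals_out[i]);
}
```

Once you have the per-vertex normals: with glBegin/glEnd you pass each one with glNormal3f right before the matching glVertex3f; with glDrawArrays you upload them as a parallel array via glNormalPointer (or a normal attribute) so the lighting can use them.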
Suppose that we have a triangle mesh without information about normals and texture coordinates.
(Basically an OBJ file with only vertices and face elements).
The objective is to show something decent using OpenGL with a program written in C.
To calculate the normals of every triangle is easy...
But what about texture mapping?
Can anyone recommend a simple algorithm/documentation/resource to map the normalized UV coordinates of an image onto a generic mesh of triangles?
(For a mesh with a single triangle it is easy, ex: [0][0], [1][0], [0][1])
The result doesn't have to be perfect; even professional software can't do that without UV unwrapping and UV seams.
The only algorithm I know works in 2D screen coordinates (screen space):
I already answered a similar question here; focus on the algorithm (i.e., vPos = (texturePos - 0.5) * 2) for converting between textureCoords in [0, 1] and 2D vertex positions in [-1, 1].
EDIT:
Note: the following is just a theory.
There might be a method working in 3D space. Eventually the transformations lead to the vertices being rendered in 2D screen coordinates:
local space --> world space --> view space --> NDC space --> screen coordinates
Using the general convention above and the 3 matrices (Model, View, Projection),
and since the vertices end up in 2D space, you could build some form of algorithm that back-tracks the texture coordinates through the inverse matrices to 3D space and goes on from there.
This, by the way, is still not a well-defined algorithm (maybe there is one, and someone will edit this answer and add it in the future...).
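To make that theory a bit more concrete, here is a hedged C sketch of the forward direction: push each vertex through an assumed model-view-projection matrix (column-major here), do the perspective divide, and remap the resulting NDC x/y from [-1, 1] into [0, 1] to use as UVs. It is essentially a planar/screen-space projection, so faces seen edge-on or from behind will still get stretched or mirrored UVs.

```c
/* Hypothetical helper: multiply a column-major 4x4 matrix by (x, y, z, 1). */
static void mul_mvp(const float mvp[16], const float v[3], float out[4]) {
    for (int r = 0; r < 4; ++r)
        out[r] = mvp[r + 0] * v[0] + mvp[r + 4] * v[1] +
                 mvp[r + 8] * v[2] + mvp[r + 12];
}

/* Assign UVs from the screen-space position of each vertex. */
void project_uvs(const float mvp[16], const float (*verts)[3],
                 int num_verts, float (*uvs_out)[2])
{
    for (int i = 0; i < num_verts; ++i) {
        float clip[4];
        mul_mvp(mvp, verts[i], clip);

        /* Perspective divide: clip space -> NDC, x and y end up in [-1, 1]. */
        float ndc_x = clip[0] / clip[3];
        float ndc_y = clip[1] / clip[3];

        /* Remap NDC [-1, 1] to texture space [0, 1]. */
        uvs_out[i][0] = ndc_x * 0.5f + 0.5f;
        uvs_out[i][1] = ndc_y * 0.5f + 0.5f;
    }
}
```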
Given a human 3D model, I want to change its shape by giving parameters, like height, waist, bust etc.
From what I gathered, the 3D model should have some 'hooks' around the areas I can change.
Any pointers for doing this with OpenGL, Three.js or any other means would be very helpful. I don't want to do it in Blender or other 3D manipulation tools; I want it done programmatically.
Here's a Sample 3D model
What you should do is "tag" a group of vertices together.
Then apply a vertex shader to those groups, which changes the position of the vertices to shrink/expand the mesh.
One way to do this is to place a point inside the mesh, and give it a radius. This pretty much means you're creating a sphere.
Run the shader on all the vertices inside the sphere.
What the shader should do is "inflate" the sphere - moving the vertices away from the center point.
Just translate each vertex away from the center by a certain amount.
(Make a vector from the center to the current vertex, extend that vector, and move the vertex to its end.)
This should work well for the belly.
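The answer above suggests doing this in a vertex shader; for clarity, here is the same "inflate" displacement as a hedged CPU-side C sketch. The Vec3 type, the function name and the linear falloff are my own assumptions, not part of any particular engine.

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;   /* hypothetical vertex type */

/* Push every vertex inside the sphere (center, radius) away from the center.
 * 'amount' in [0, 1]: 0 leaves the mesh untouched, 1 pushes affected
 * vertices all the way out to the sphere surface. */
void inflate(Vec3 *verts, int num_verts, Vec3 center, float radius, float amount)
{
    for (int i = 0; i < num_verts; ++i) {
        float dx = verts[i].x - center.x;
        float dy = verts[i].y - center.y;
        float dz = verts[i].z - center.z;
        float dist = sqrtf(dx * dx + dy * dy + dz * dz);

        if (dist > 0.0f && dist < radius) {
            /* Continue the center->vertex vector; vertices near the center
             * move the most, vertices near the sphere surface barely move,
             * so the effect fades out smoothly at the boundary. */
            float target = dist + amount * (radius - dist);
            float scale  = target / dist;
            verts[i].x = center.x + dx * scale;
            verts[i].y = center.y + dy * scale;
            verts[i].z = center.z + dz * scale;
        }
    }
}
```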
Another shader you can do is to stretch the mesh vertically (for the person's height).
This is more straightforward.
Just run on all vertices and add to their height.
How much to add - that's what you should figure out. My intuition says it can't be a constant - I think it's a linear function but I'm not sure.
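For the height stretch, one linear choice consistent with that intuition is to scale every vertex's height about the model's lowest point, so the amount added grows linearly from the feet upward. Again just a sketch; minY is assumed to be precomputed and Vec3 is the same hypothetical type as above.

```c
/* Stretch the model vertically by 'factor' (e.g. 1.1 for +10% height),
 * keeping the lowest point (the feet) fixed. The amount added to each
 * vertex is linear in its height, not a constant. */
void stretch_height(Vec3 *verts, int num_verts, float minY, float factor)
{
    for (int i = 0; i < num_verts; ++i)
        verts[i].y = minY + (verts[i].y - minY) * factor;
}
```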
So when drawing a rectangle in OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect. Just like the following:
I saw from the page below that this can be fixed, but the solution given only works for trapezoids. Also, I'll be doing this over many rectangles.
And so, the question is: what is the proper, and most efficient, way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t) that r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
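For the fixed-function path, a commonly used sketch of the "q trick" looks like this: find where the quad's diagonals intersect, scale each vertex's (s, t) by a per-vertex q derived from the distances to that intersection, and submit them with glTexCoord4f. The quad is assumed convex and wound P0..P3; as noted above, this only fixes the texture mapping, other per-vertex attributes still interpolate per triangle.

```c
#include <GL/gl.h>
#include <math.h>

/* Draw a convex quad (screen-space corners px[4], py[4], texture corners
 * s[4], t[4]) with projective texture coordinates, so the whole quad is
 * mapped as one projected rectangle instead of two independent triangles. */
void draw_projective_quad(const float px[4], const float py[4],
                          const float s[4], const float t[4])
{
    /* Intersection of the diagonals P0->P2 and P1->P3. */
    float d02x = px[2] - px[0], d02y = py[2] - py[0];
    float d13x = px[3] - px[1], d13y = py[3] - py[1];
    float denom = d02x * d13y - d02y * d13x;
    float u = ((px[1] - px[0]) * d13y - (py[1] - py[0]) * d13x) / denom;
    float cx = px[0] + u * d02x;
    float cy = py[0] + u * d02y;

    glBegin(GL_QUADS);
    for (int i = 0; i < 4; ++i) {
        int opp = (i + 2) & 3;   /* vertex across the diagonal */
        float di = hypotf(px[i]   - cx, py[i]   - cy);
        float dj = hypotf(px[opp] - cx, py[opp] - cy);
        float q  = (di + dj) / dj;

        /* Scale s and t by q; texturing divides by q per fragment again. */
        glTexCoord4f(s[i] * q, t[i] * q, 0.0f, q);
        glVertex2f(px[i], py[i]);
    }
    glEnd();
}
```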
Let's say I have this geometry:
glutSolidTeapot(1);
I want to slice it into 8 cubes, for example across the 3 planes (xy), (yz), (xz), to make a 3D puzzle.
How can I clip a geometry?
There are 2 ways of doing this. I'm going to assume you want to slice your geometry into cubes, but other shapes can be done similarly.
1. Slice your triangle mesh
Here you just loop through all your triangles and check which cube each triangle belongs to. If a triangle intersects multiple cubes you need to split it into multiple triangles. You'll need to do some math for line-plane intersection to get the splits right, but it's not very hard (a small helper is sketched below).
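The line-plane intersection bit can be as small as this hedged C helper (the names and types are my own): classify each vertex of a triangle by its signed distance to the cutting plane, and for every edge whose endpoints land on opposite sides, compute the crossing point; the triangle is then rebuilt from the original and crossing vertices.

```c
typedef struct { float x, y, z; } Vec3;

/* Signed distance measure of point p to the plane n.p + d = 0
 * ('n' need not be unit length for the sign/ratio used below). */
static float plane_side(Vec3 p, Vec3 n, float d) {
    return n.x * p.x + n.y * p.y + n.z * p.z + d;
}

/* Point where the segment a->b crosses the plane (assumes a and b lie on
 * opposite sides, i.e. plane_side(a) and plane_side(b) differ in sign). */
static Vec3 split_edge(Vec3 a, Vec3 b, Vec3 n, float d) {
    float da = plane_side(a, n, d);
    float db = plane_side(b, n, d);
    float t  = da / (da - db);              /* fraction along a->b */
    Vec3 p = { a.x + t * (b.x - a.x),
               a.y + t * (b.y - a.y),
               a.z + t * (b.z - a.z) };
    return p;
}
```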
2. Use OpenGL clip planes
You can also render your geometry multiple times, but clip away everything except the part you want shown on screen. This can be done using glClipPlane (see http://www.opengl.org/sdk/docs/man2/xhtml/glClipPlane.xml). For each cube you'll need 6 clip planes. This method will be slower than the first, as the GPU needs to consider every triangle for every cube.
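A hedged sketch of that fixed-function path: for each of the 8 cells, enable the six clip planes bounding that cell and draw the whole teapot again. Plane equations follow the glClipPlane convention (points where ax + by + cz + d >= 0 are kept); the box bounds are parameters here because the exact teapot extents are an assumption.

```c
#include <GL/gl.h>
#include <GL/glut.h>

/* Draw one cell of the puzzle: clip the teapot to the axis-aligned box
 * [minX,maxX] x [minY,maxY] x [minZ,maxZ] using 6 clip planes. */
static void draw_clipped_cell(double minX, double maxX,
                              double minY, double maxY,
                              double minZ, double maxZ)
{
    /* Each plane keeps the half-space where ax + by + cz + d >= 0. */
    GLdouble planes[6][4] = {
        {  1, 0, 0, -minX }, { -1, 0, 0,  maxX },   /* x >= minX, x <= maxX */
        {  0, 1, 0, -minY }, {  0,-1, 0,  maxY },   /* y >= minY, y <= maxY */
        {  0, 0, 1, -minZ }, {  0, 0,-1,  maxZ },   /* z >= minZ, z <= maxZ */
    };

    for (int i = 0; i < 6; ++i) {
        glClipPlane(GL_CLIP_PLANE0 + i, planes[i]);
        glEnable(GL_CLIP_PLANE0 + i);
    }

    glutSolidTeapot(1);

    for (int i = 0; i < 6; ++i)
        glDisable(GL_CLIP_PLANE0 + i);
}
```

To pull the puzzle apart, you can apply a small per-piece glTranslatef before specifying the clip planes and drawing, so the geometry and its clip region move together.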
I'm looking to capture a 360-degree (spherical panorama) photo of my scene. How can I best do this? If I have it right, I can't do this the ordinary way of just setting the perspective FOV to 360.
If I would need a vertex shader, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy": only the vertices get displaced, while the edges between them remain straight lines on screen. You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the 6 directions making up a cube. This allows you to render the scene with regular perspective projections.
In a second step you use the generated cubemap to texture a screen filling grid, where the texture coordinates of each vertex are azimuth and elevation.
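Here is a hedged CPU-side C sketch of that second step: building the screen-filling grid whose per-vertex "texture coordinate" is the 3D direction derived from azimuth and elevation, which is what a cube map lookup wants. The grid resolution, the equirectangular layout and the output arrays are assumptions; rendering the six cube faces and doing the actual cube map sampling are left out.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Fill a (cols+1) x (rows+1) grid covering the full sphere.
 * positions_out : 2D NDC positions of the grid vertices (x, y in [-1, 1])
 * dirs_out      : per-vertex 3D direction used as the cube map coordinate */
void build_panorama_grid(int cols, int rows,
                         float (*positions_out)[2], float (*dirs_out)[3])
{
    int v = 0;
    for (int j = 0; j <= rows; ++j) {
        for (int i = 0; i <= cols; ++i, ++v) {
            /* Azimuth covers the full turn, elevation from -pi/2 to +pi/2. */
            float azimuth   = ((float)i / cols) * 2.0f * (float)M_PI - (float)M_PI;
            float elevation = ((float)j / rows) * (float)M_PI - (float)M_PI / 2.0f;

            /* Equirectangular layout: the grid itself is a flat rectangle. */
            positions_out[v][0] = ((float)i / cols) * 2.0f - 1.0f;
            positions_out[v][1] = ((float)j / rows) * 2.0f - 1.0f;

            /* Direction vector for the cube map lookup at this vertex. */
            dirs_out[v][0] = cosf(elevation) * sinf(azimuth);
            dirs_out[v][1] = sinf(elevation);
            dirs_out[v][2] = cosf(elevation) * cosf(azimuth);
        }
    }
}
```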
Another approach is to use tiled rendering with a very small FOV while rotating the "camera", kind of like taking a panoramic picture without a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering; it's just easier to get right than trying to do it directly with changed camera direction and viewport placement.