Step 1: Find 68 landmarks on a 2D image (with dlib)
So I know the 2D coordinates of all 68 landmarks!
Step 2: Create a 3D mask of a generic face (with OpenGL) -> Result
I know all the 3D coordinates of the face model as well!
Now I want to use this tutorial to texture-map all the triangles from the 2D image onto the 3D generic face model.
Does anyone know a solution to my problem? If you need more information, just send me a message and I will provide what you need. Thanks, everybody!
EDIT: After finding this tutorial, I resized my picture so that its width and height are powers of two.
Then I divide all my picture coordinates (the landmarks) by the image size:
landmark(x) / width and landmark(y) / height
Picture:
Result:
The bigger the width and the height, the better the image definition!
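The normalization step described in the EDIT can be sketched as follows. This is a minimal illustration, not the poster's actual code; the names `landmarks_to_uv`, `image_w`, and `image_h` are made up for the example. Note the optional V flip: OpenGL's texture origin is the bottom-left corner, while image rows usually start at the top.

```python
# Sketch: normalize dlib landmark pixel coordinates into OpenGL
# texture coordinates in [0, 1]. All names here are illustrative.

def landmarks_to_uv(landmarks, image_w, image_h, flip_v=True):
    """Convert (x, y) pixel coordinates to (u, v) texture coordinates.

    x is divided by the width and y by the height. Because OpenGL's
    texture origin is bottom-left and image rows start at the top,
    v is flipped by default.
    """
    uvs = []
    for x, y in landmarks:
        u = x / image_w
        v = y / image_h
        if flip_v:
            v = 1.0 - v
        uvs.append((u, v))
    return uvs
```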
What you're seeing looks like you passed all your vertices directly to glDrawArrays without any reuse, so each vertex is used for a single triangle in your result, rather than being shared by 6 or more triangles as in the original picture.
You need to use an element buffer to describe how all your triangles are made up of the vertices you have, and use glDrawElements to draw them.
Also note that some of your polygons on the original image are in fact not triangles. You'll probably want to insert additional triangles for those polygons (the inside of the eyes).
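The index-buffer idea can be shown without any GL calls: deduplicate the vertices and record, per triangle corner, the index of the shared vertex. This is a hedged sketch with hypothetical data; the resulting arrays are what you would upload as a vertex buffer plus an element buffer and draw with glDrawElements.

```python
# Sketch: build an element (index) buffer so shared vertices are
# reused across triangles instead of duplicated per triangle.

def build_index_buffer(triangles):
    """`triangles` is a list of 3-tuples of (x, y) vertices.
    Returns (vertex_list, index_list) suitable for glDrawElements."""
    vertices = []
    lookup = {}   # vertex -> index of its first occurrence
    indices = []
    for tri in triangles:
        for v in tri:
            if v not in lookup:
                lookup[v] = len(vertices)
                vertices.append(v)
            indices.append(lookup[v])
    return vertices, indices
```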
So I am making a simple system that traces a path in a 2D triangle, and then I have to find its equivalent on a 3D one. That would not be as much of a hassle if said triangle were not from a texture, meaning that it may have different angles etc. from the one marked in the .png file. Finding the triangle's corner points is one thing, but I also need the points inside the triangle, with the same relative distances from all corners. I have no idea how to do it. Is there any simple way to do it?
EDIT:
To elaborate a little:
I have a 3D mesh with a texture applied in an external program (e.g. Blender). A mesh triangle's geometry may vary during mapping (that is the whole point of texture mapping: being able to adjust shape and size to the image), but the image points are bound to fixed vertices on the model. I load the mesh in my program and read the coordinates of the triangles, as well as the texture coordinates in the range (0, 1) for each vertex. Then I load the texture file and extract the needed information (tool paths based on colors in the image), but the paths I generate are still 2D and in texture-image scale. I need to scale them to real size (the points of the triangle) and keep the found paths on the surface of the model, so I need to find points that are at the same relative distances from each corner and lie in the triangle's plane.
EDIT2:
The 2D path comes from gradually scaling down the outline of a shape detected on the texture. I convert the image to binary, get the outline, and scale it down several times to get these concentric paths. The path is described in image coordinate space because it comes directly from the image. Now this concentric path needs to be converted to the mesh's coordinates.
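The "same relative distances from all corners" requirement is exactly what barycentric coordinates give you: compute the weights of the 2D point in the texture-space triangle, then apply the same weights to the 3D triangle's vertices. The result automatically lies in the 3D triangle's plane. A minimal sketch, with illustrative function names:

```python
# Sketch: carry a 2D point from a texture-space triangle onto the
# corresponding 3D mesh triangle via barycentric coordinates.

def barycentric_2d(p, a, b, c):
    """Barycentric weights (wa, wb, wc) of 2D point p in triangle abc."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

def map_to_3d(p, uv_tri, xyz_tri):
    """Map 2D texture point p to the 3D triangle with the same weights."""
    wa, wb, wc = barycentric_2d(p, *uv_tri)
    # Interpolate each xyz component with the same weights.
    return tuple(wa * a + wb * b + wc * c
                 for a, b, c in zip(*xyz_tri))
```

Applied to every point of the concentric path, this converts the whole path from image space to mesh space.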
I'm using OpenGL to develop a 2D game, and I'm trying to map a texture around a circle, as shown in the image below. I have noticed that many games use this technique because it reduces the size of texture resources.
But I don't know which texture mapping technique it used. Any suggestions?
Just as genpfault pointed out:
Create a bunch of quads along two circles. Set their UV coordinates A, B, C, D as shown in the picture. To get point C, just add the distance h to the vector from the center to B.
PS: you will need a lot more quads than I drew.
Generate a donut of quads with appropriate texture coordinates.
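Generating the donut can be sketched as below: walk around the circle in fixed angular steps, emitting one quad per step between the inner and outer radius, with U advancing around the ring and V going from inner (0) to outer (1) edge. The parameter names are illustrative.

```python
import math

# Sketch: generate a ring ("donut") of textured quads between an inner
# and an outer circle. Each quad's corners carry (x, y, u, v).

def ring_quads(cx, cy, r_inner, r_outer, segments):
    """Return `segments` quads; each quad is four (x, y, u, v) corners:
    inner/outer at angle t0, then outer/inner at angle t1."""
    quads = []
    for i in range(segments):
        t0 = 2 * math.pi * i / segments
        t1 = 2 * math.pi * (i + 1) / segments
        u0 = i / segments          # U wraps the strip around the ring
        u1 = (i + 1) / segments
        quads.append([
            (cx + r_inner * math.cos(t0), cy + r_inner * math.sin(t0), u0, 0.0),
            (cx + r_outer * math.cos(t0), cy + r_outer * math.sin(t0), u0, 1.0),
            (cx + r_outer * math.cos(t1), cy + r_outer * math.sin(t1), u1, 1.0),
            (cx + r_inner * math.cos(t1), cy + r_inner * math.sin(t1), u1, 0.0),
        ])
    return quads
```

With enough segments the ring looks smooth; too few and the texture visibly kinks at the quad seams, which is the "a lot more quads than I drew" point above.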
I'm working on a scanline rendering for a class project. The renderer works so far, it reads in a model (using the utah teapot mostly), computes vertex/surface normals, and can do flat and phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder), and that you then map the intermediate surface onto the object surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the corresponding cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
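That conversion is only a few lines. A sketch, assuming the cylinder axis is the model's Y axis and `min_y`/`max_y` are the model's vertical extent (illustrative names):

```python
import math

# Sketch: project a surface point to texture coordinates through a
# cylinder around the vertical axis: theta becomes u, height becomes v.

def cylinder_uv(x, y, z, min_y, max_y):
    """Map a 3D point to (u, v) via cylindrical coordinates.

    atan2 returns theta in (-pi, pi]; taking it modulo 2*pi shifts it
    into [0, 2*pi) so u wraps cleanly in [0, 1)."""
    theta = math.atan2(z, x) % (2 * math.pi)
    u = theta / (2 * math.pi)
    v = (y - min_y) / (max_y - min_y)
    return u, v
```

Note that the radius r drops out entirely: every point at the same angle and height samples the same texel, which is exactly the "wrapped on a cylinder" behavior.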
This is what you are trying to achieve:
I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way that uses polygon proxy geometry that shifts when the viewing parameters change. How can I convert the 2D images to one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still in confusion about this volume rendering technique. Is it possible to take a 3d location in the 3d texture and map it to a 2d corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3d vertices. Does anyone know more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3d cube of texels, and yes, you can map a 3d location in this cube to a corner of a quad.
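The formatting question has a simple answer: glTexImage3D expects one contiguous buffer in which the innermost index runs along a row, then rows within a slice, then slices along the depth axis. A sketch of flattening a CT stack into that layout, with illustrative data:

```python
# Sketch: flatten a stack of 2D CT slices into the contiguous buffer
# layout glTexImage3D expects: depth-major, then row, then column,
# one intensity value per texel.

def stack_slices(slices):
    """`slices` is a list of 2D lists, all with the same width/height.
    Returns a flat [z][y][x]-ordered list, matching glTexImage3D's
    depth/height/width order."""
    texels = []
    for sl in slices:           # z: one CT image per depth layer
        for row in sl:          # y: image rows
            texels.extend(row)  # x: pixels within a row
    return texels
```

In JOGL you would wrap the result in a Buffer and pass it with the stack's width, height, and slice count as the texture's width, height, and depth.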
OpenGL does have a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of this volume. If you want a volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I'm wondering how I can make this look good (to see the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
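The "manually set the line colors based on distance" idea amounts to a hand-rolled linear fog. A sketch, with illustrative parameter names; the blended color would be fed to glColor3f per vertex:

```python
# Sketch: fake a depth cue by fading each contour vertex's color
# toward the background color with distance from the viewer.

def depth_faded_color(distance, near, far,
                      line_color=(1.0, 1.0, 1.0),
                      fog_color=(0.0, 0.0, 0.0)):
    """Linearly blend line_color into fog_color as distance goes
    from near to far."""
    t = (distance - near) / (far - near)
    t = max(0.0, min(1.0, t))  # clamp outside the [near, far] range
    return tuple(lc + (fc - lc) * t
                 for lc, fc in zip(line_color, fog_color))
```

Using altitude instead of viewer distance as the `distance` argument gives the topographic-coloring variant instead.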
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors - and you have no surfaces. However, @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
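The coplanar case above can be sketched concretely: give each vertex of an XY contour polyline the 2D normal of its neighboring segments, with Z set to 0, and feed that to glNormal per vertex. The polyline data and names here are illustrative.

```python
import math

# Sketch: per-vertex normals for a coplanar (XY) contour polyline,
# usable with OpenGL lighting. Each segment (dx, dy) gets the
# left-hand 2D normal (-dy, dx); vertices average adjacent segments.

def contour_normals(points):
    """Return one (nx, ny, 0.0) normal per input (x, y) vertex."""
    seg_normals = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        seg_normals.append((-dy / length, dx / length))
    normals = []
    for i in range(len(points)):
        adj = seg_normals[max(i - 1, 0):i + 1]  # neighboring segments
        nx = sum(n[0] for n in adj) / len(adj)
        ny = sum(n[1] for n in adj) / len(adj)
        normals.append((nx, ny, 0.0))  # Z = 0: normal stays in-plane
    return normals
```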
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them... can you adjust the density of the contours? E.g. one contour line per 5 ft of height difference instead of per 1 ft, or whatever the units are, depending on what you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up four corner points at zero height at the bounds or beyond, then drop the contours into the mesh and have the mesh triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh, one normally uses the Delaunay algorithm, which is a bit of a beast, but libraries for it do exist. The best ones I know of are based on the Guibas and Stolfi papers, since that approach is pretty optimal.
To generate the normals, you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
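That cross-product step looks like this in a minimal sketch (the winding convention is an assumption: counter-clockwise a -> b -> c gives a normal facing the viewer):

```python
import math

# Sketch: face normal of a triangle via the cross product of two edge
# vectors, renormalized before handing to glNormal.

def face_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]  # edge a -> b
    v = [c[i] - a[i] for i in range(3)]  # edge a -> c
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return tuple(x / length for x in n)  # renormalize to unit length
```

Averaging these face normals over the faces that share a vertex gives the smoothed per-vertex normals mentioned above.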
In the old days you would make a display list out of the result, but the newer way is to build a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art - good for games, not so good for CAD.
(thx for bonus last time)