How do I map a texture correctly onto a convex polygon in SFML or OpenGL? - c++

I want to represent my objects as textured convex polygons. For the most part those will just be rotated rectangles, but I want to support convex shapes too, and that's where the problems arise.
I worked with Blender a while ago, and there you could unwrap 3D objects and explicitly tell Blender which vertex of the shape has which position on the texture.
Would it maybe be better to just require the texture to have the size of the shape's bounding rectangle, so I can apply the texture directly with SFML?
PS: I'm sorry I can't post pictures to clarify my question.

or OpenGL
In OpenGL you'll typically have two (or more!) vertex attributes: position and texture coordinate. That is exactly a way of telling the renderer which vertex of the shape has which position on the texture.
That's what SFML has to be doing internally, and since it's open source, you might just peek inside and see whether your "bounding rectangle" idea has a chance of working (my guess is that it does).
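For illustration, here is a minimal sketch of that per-vertex mapping against the SFML 2 API (the file name texture.png and all the coordinates are made up). Each sf::Vertex pairs a screen position with a texCoords position on the texture:

    #include <SFML/Graphics.hpp>

    int main()
    {
        sf::RenderWindow window(sf::VideoMode(800, 600), "Textured convex polygon");

        sf::Texture texture;
        if (!texture.loadFromFile("texture.png")) // made-up file name
            return 1;

        // A convex pentagon as a triangle fan. Each vertex pairs a screen
        // position with a position on the texture (texCoords).
        sf::VertexArray polygon(sf::TriangleFan, 5);
        polygon[0] = sf::Vertex(sf::Vector2f(400.f, 150.f), sf::Vector2f(64.f,   0.f));
        polygon[1] = sf::Vertex(sf::Vector2f(550.f, 260.f), sf::Vector2f(128.f, 48.f));
        polygon[2] = sf::Vertex(sf::Vector2f(490.f, 430.f), sf::Vector2f(104.f, 128.f));
        polygon[3] = sf::Vertex(sf::Vector2f(310.f, 430.f), sf::Vector2f(24.f,  128.f));
        polygon[4] = sf::Vertex(sf::Vector2f(250.f, 260.f), sf::Vector2f(0.f,   48.f));

        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            window.clear();
            window.draw(polygon, &texture); // the texture becomes part of the render states
            window.display();
        }
        return 0;
    }

One SFML peculiarity worth knowing: texCoords are in texture pixels, not normalized [0, 1] values as in raw OpenGL.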

Related

Texture mapping with cylinder intermediate surface manually

I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder), and that you then map the intermediate surface onto the object surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help with pseudocode for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
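In code that boils down to a few lines. A sketch, assuming the cylinder's axis is the model's y axis and that minY/maxY (the model's vertical extent) are known:

    #include <cmath>

    // Map a surface point to (u, v) using a cylinder around the y axis as
    // the intermediate shape. minY/maxY describe the model's vertical
    // extent and are assumed to be known.
    void cylindricalUV(float x, float y, float z,
                       float minY, float maxY,
                       float& u, float& v)
    {
        const float PI = 3.14159265358979f;

        float theta = std::atan2(z, x);  // angle around the axis, [-pi, pi]
        u = (theta + PI) / (2.0f * PI);  // -> [0, 1]

        v = (y - minY) / (maxY - minY);  // height along the axis -> [0, 1]
    }

The radius r drops out entirely; for the teapot this means every vertex is mapped as if it were pushed straight out onto the surrounding cylinder.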

opengl selecting area on model

I need some help with surface-area selection on a 3D model rendered in OpenGL, by picking points with the mouse. I know how to get a point in world coordinates, but I can't find a way to select an area. Later I need to remesh that selected area and map an image over it, which I know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API: you draw things, but once the drawing commands have been executed, all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. Then, by looking at which values appear in the selected region, you know which faces are present in that area.
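A rough sketch of that idea with legacy OpenGL, where drawFace() stands in for whatever emits the geometry of one face (the one-byte face id and the RGB readback are simplifying assumptions):

    #include <GL/gl.h>
    #include <vector>

    void drawFace(int i); // hypothetical: emits the geometry of face i

    // Index-color picking: draw each face in a unique flat color, then
    // read the selection rectangle back. Assumes at most 255 faces so
    // the id fits into the red channel.
    std::vector<GLubyte> facesInRegion(int x, int y, int w, int h, int faceCount)
    {
        glDisable(GL_LIGHTING);   // the color must reach the framebuffer
        glDisable(GL_TEXTURE_2D); // unmodified
        glDisable(GL_DITHER);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (int i = 0; i < faceCount; ++i) {
            glColor3ub(static_cast<GLubyte>(i + 1), 0, 0); // id 0 = background
            drawFace(i);
        }

        std::vector<GLubyte> pixels(w * h * 3);
        glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
        return pixels; // every distinct red value is a face in the region
    }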
Later I need to remesh
This is called topology modification and is completely outside the scope of OpenGL.
that selected area and map an image over it which I know
You can use an image-based approach for this again; however, you must first decide how you want to map images to faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by drawing texture coordinates into another off-screen framebuffer and thereby reverse-mapping screen coordinates to texture coordinates.

How to texture Opengl glut objects (C++)

I have already tried and succeeded at loading a texture from a BMP file and drawing quads and triangles with it. However, I need to apply the loaded texture to objects drawn with glutSolidDodecahedron and glutSolidSphere. How can I do this? Please include some code if possible.
Note: I HAVE to use those functions; I'm not allowed to draw the shapes from scratch.
Neither glutSolidDodecahedron nor glutSolidSphere specifies texture coordinates, at least not according to any documentation that a quick web search turns up. I had a quick look at the FreeGLUT implementations, and they indeed do not specify texture coordinates.
If you can use shaders, you can derive 2D texture coordinates from the 3D positions of the vertices. Spheres and dodecahedra are pretty regular shapes, so you can simply do a spherical projection (convert the vertex position to spherical coordinates and drop the radius component).
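The projection itself is just a coordinate conversion. Here it is sketched as plain C++ for clarity, although with the GLUT solids this arithmetic would have to run in a vertex shader, since you never see the vertices on the CPU:

    #include <cmath>

    // Spherical projection of a vertex position (model centered on the
    // origin) to (u, v): convert to spherical coordinates, drop the radius.
    void sphericalUV(float x, float y, float z, float& u, float& v)
    {
        const float PI = 3.14159265358979f;
        float r = std::sqrt(x * x + y * y + z * z);

        float theta = std::atan2(z, x); // longitude, [-pi, pi]
        float phi   = std::acos(y / r); // angle from the +y pole, [0, pi]

        u = (theta + PI) / (2.0f * PI);
        v = phi / PI;
    }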

OpenGL texture mapping to already projected shape?

Newbie to OpenGL...
I have some very simple (non-OpenGL) code for rotating a rectangle around a single axis and projecting the result down to screen coordinates. I'm now trying to map a bitmap onto the resulting shape using OpenGL. When animating the rotation, the perspective of the bitmap is quite heavily distorted. Is this to be expected? Is there something I can do about it?
I know I can use OpenGL to do the whole thing instead (and that works fine), but for my current project the approach above would suit me better, if I can just get around this perspective issue... I'm thinking maybe there isn't enough information left, after I have projected the rotated rectangle down to 2D space, for OpenGL to map the bitmap with the right perspective...?
Any input would be much appreciated.
Thanks,
Daniel
To clarify:
I'm using an orthographic projection and doing the 3D calculation and projection to 2D myself. Then I just use OpenGL to render the resulting shape with a texture.
If you project your coordinates yourself and do the texture mapping in 2D screen coordinates, you will lose all projective information and the textures will be badly distorted.
You can get around this by using perspective texture mapping. A lot of different ways to do this exist: either write a real perspective texture mapper, or fake it with a plain (affine) texture mapper.
Explaining how this works is somewhat beyond the scope of a single answer. I suggest you read the Wikipedia page about texture mapping first and try the subdivision method:
http://en.wikipedia.org/wiki/Texture_mapping
Then come back and ask more detailed questions.
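One concrete way to fake it while letting the fixed-function pipeline do the work is the homogeneous texture coordinate trick: keep the depth value w each corner had before your own perspective divide and hand it to OpenGL as the q component, so the per-fragment division restores the perspective. A sketch, where corner, uv, and w are assumed to come out of your existing projection code:

    #include <GL/gl.h>

    struct Vec2 { float x, y; };

    // corner: the four projected 2D screen positions
    // uv:     the texture coordinates of each corner
    // w:      the depth each corner had before your perspective divide
    void drawProjectedQuad(const Vec2 corner[4], const Vec2 uv[4], const float w[4])
    {
        glBegin(GL_QUADS);
        for (int i = 0; i < 4; ++i) {
            // Premultiply s and t by w and pass w as q; OpenGL's
            // per-fragment division by q restores the perspective.
            glTexCoord4f(uv[i].x * w[i], uv[i].y * w[i], 0.0f, w[i]);
            glVertex2f(corner[i].x, corner[i].y);
        }
        glEnd();
    }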
I found the following page that explains the subdivision method in detail:
http://freespace.virgin.net/hugo.elias/graphics/x_persp.htm
It worked perfectly! Thanks Nils for pointing me in the right direction.

OpenGL lighting question?

Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make this look good (to see the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
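A minimal sketch of the manual-coloring idea, assuming each contour is a list of 3D points, z is the altitude, and minZ/maxZ (the altitude range of the data set) are known:

    #include <GL/gl.h>
    #include <vector>

    struct Point3 { float x, y, z; };

    // Color each contour vertex by altitude: dark blue low, white high.
    // minZ/maxZ are the altitude range of the whole data set.
    void drawContour(const std::vector<Point3>& contour, float minZ, float maxZ)
    {
        glBegin(GL_LINE_STRIP);
        for (const Point3& p : contour) {
            float t = (p.z - minZ) / (maxZ - minZ); // 0 bottom, 1 top
            glColor3f(t, t, 0.3f + 0.7f * t);
            glVertex3f(p.x, p.y, p.z);
        }
        glEnd();
    }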
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors, and you have no surfaces. However, @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such, any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal of the line, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
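A sketch of that option: rotate the 2D direction along the strip by 90 degrees and feed it to glNormal with a zero Z component (the Point3 type and the XY-plane assumption are mine):

    #include <GL/gl.h>
    #include <cmath>
    #include <vector>

    struct Point3 { float x, y, z; };

    // For a contour lying in the XY plane: rotate the edge direction by
    // 90 degrees and use it as the per-vertex normal with Z = 0, so the
    // fixed-function lighting has something to work with.
    void drawLitContour(const std::vector<Point3>& c)
    {
        if (c.size() < 2) return;

        glBegin(GL_LINE_STRIP);
        for (std::size_t i = 0; i < c.size(); ++i) {
            // Use the edge leaving this vertex; reuse the last edge at the end.
            std::size_t a = (i + 1 < c.size()) ? i : i - 1;
            float dx = c[a + 1].x - c[a].x;
            float dy = c[a + 1].y - c[a].y;
            float len = std::sqrt(dx * dx + dy * dy);

            glNormal3f(-dy / len, dx / len, 0.0f); // 2D normal, Z = 0
            glVertex3f(c[i].x, c[i].y, c[i].z);
        }
        glEnd();
    }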
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (are these topographic contours?) or based on distance from the viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them: can you adjust the density of the contours? E.g. one contour line per 5 ft of height difference instead of per 1 ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up four corner points at zero height at the bounds (or beyond), then drop the contours into the mesh and have it triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but libraries for it do exist. The best ones I know of are based on the Guibas and Stolfi paper, since their approach is pretty optimal.
To generate the normals you do a simple cross product, make sure the facing is correct, and manually renormalize them before feeding them into glNormal.
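That step looks like this (a self-contained sketch; the counter-clockwise winding assumption determines which way the normal faces):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Face normal of triangle (a, b, c): cross product of two edges,
    // manually renormalized. Counter-clockwise winding (seen from
    // outside) makes the normal point outward.
    Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
    {
        Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
        Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };

        Vec3 n = { e1.y * e2.z - e1.z * e2.y,
                   e1.z * e2.x - e1.x * e2.z,
                   e1.x * e2.y - e1.y * e2.x };

        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        n.x /= len; n.y /= len; n.z /= len;
        return n;
    }

The smoothing over adjacent faces is then just averaging the face normals of every face sharing a vertex and renormalizing once more.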
In the old days you would make a display list (glList) out of the result, but the newer way is to build a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art: good for games, not so good for CAD.
(thx for bonus last time)