OpenGL texture mapping onto an already projected shape?

Newbie to OpenGL...
I have some very simple code (non-OpenGL) for rotating a rectangle around a single axis and projecting the result down to screen coordinates. I'm now trying to map a bitmap to the resulting shape using OpenGL. When animating the rotation, the perspective of the bitmap is quite heavily distorted. Is this to be expected? Is there something I can do about it?
I know I can use OpenGL to do the whole thing instead (and that works fine), but for my current project the approach above would suit me better, if I can just get around this perspective issue... I'm thinking maybe there isn't enough information left, after I have projected the rotated rectangle down to 2D space, for OpenGL to map the bitmap with the right perspective?
Any input would be much appreciated.
Thanks,
Daniel
To clarify:
I'm using an orthographic projection, and doing the 3D calculation and projection to 2D myself. Then I just use OpenGL for rendering the resulting shape with a texture.

If you project your coordinates yourself and do the texture mapping in 2D screen coordinates, you will lose all perspective information and the textures will be badly distorted.
You can get around this by using perspective texture mapping. There are a lot of different ways to do this: either write a true perspective-correct texture mapper, or fake one using a plain affine texture mapper (for example, by subdividing the polygons).
Explaining how this works is somewhat beyond the scope of a single question. I suggest you read the wiki page about perspective texture mapping first and try out the subdivision method:
http://en.wikipedia.org/wiki/Texture_mapping
Then come back and ask more detailed questions.
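
For illustration, here is a minimal sketch (in C++, with a hypothetical Vertex struct) of the interpolation the subdivision method relies on. It assumes you kept each vertex's depth w from your own projection step: interpolating u/w, v/w and 1/w linearly in screen space and dividing back out yields perspective-correct texture coordinates at every subdivision point.

    // Hypothetical vertex produced by your own 3D-to-2D projection.
    struct Vertex {
        float x, y;   // projected screen position
        float u, v;   // texture coordinates
        float w;      // depth kept from the projection step
    };

    // Perspective-correct interpolation between two projected vertices
    // at parameter t in [0, 1]: u/w, v/w and 1/w vary linearly in
    // screen space, so interpolate those and divide back out.
    Vertex lerpPerspective(const Vertex& a, const Vertex& b, float t)
    {
        Vertex r;
        r.x = a.x + t * (b.x - a.x);
        r.y = a.y + t * (b.y - a.y);
        float invW   = 1.0f / a.w + t * (1.0f / b.w - 1.0f / a.w);
        float uOverW = a.u / a.w + t * (b.u / b.w - a.u / a.w);
        float vOverW = a.v / a.w + t * (b.v / b.w - a.v / a.w);
        r.w = 1.0f / invW;
        r.u = uOverW * r.w;
        r.v = vOverW * r.w;
        return r;
    }

Subdivide the quad into a grid with this function and draw the small cells with plain (affine) texturing; the error within each cell becomes too small to notice.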

I found the following page that explains the subdivision method in detail:
http://freespace.virgin.net/hugo.elias/graphics/x_persp.htm
It worked perfectly! Thanks Nils for pointing me in the right direction.

Related

How do I map a texture correctly onto a convex polygon in SFML or OpenGL?

I want to represent my objects as textured convex polygons. For the most part those will just be rotated rectangles, but I want to support convex shapes too, and that's where the problems arise.
I worked with Blender a while ago, and there you could unwrap the 3D objects and explicitly tell Blender which vertex of the shape has which position on the texture.
Would it maybe be better to just require the texture to have the size of the bounding rectangle of the shape, so I can simply apply the texture with SFML?
PS: I'm sorry I can't post pictures to clarify my question.
In OpenGL, you'll typically have two (or more!) vertex attributes: position and texture coordinate. That is basically saying which vertex of the shape has which position on the texture.
That's what SFML has to be doing internally, and since it's open source, you might just peek inside and see whether your "bounding rectangle" idea has a chance of working (my guess is that it indeed does).
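
As a rough sketch of that idea in SFML 2 (the file name and coordinates are made up), each sf::Vertex carries both a shape position and a texture position, which is exactly the per-vertex mapping you describe from Blender:

    #include <SFML/Graphics.hpp>

    int main()
    {
        sf::RenderWindow window(sf::VideoMode(800, 600), "Textured polygon");

        sf::Texture texture;
        if (!texture.loadFromFile("texture.png"))  // hypothetical file
            return 1;

        // A convex pentagon; in SFML, texCoords are in texture pixels.
        sf::VertexArray poly(sf::TrianglesFan, 5);
        poly[0] = sf::Vertex(sf::Vector2f(400, 100), sf::Vector2f(128,   0));
        poly[1] = sf::Vertex(sf::Vector2f(550, 250), sf::Vector2f(256,  96));
        poly[2] = sf::Vertex(sf::Vector2f(480, 450), sf::Vector2f(192, 256));
        poly[3] = sf::Vertex(sf::Vector2f(320, 450), sf::Vector2f( 64, 256));
        poly[4] = sf::Vertex(sf::Vector2f(250, 250), sf::Vector2f(  0,  96));

        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            window.clear();
            window.draw(poly, &texture);  // texture passed via render states
            window.display();
        }
    }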

OpenGL Perspective Texture Flickering

I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
As the camera lines up with a wall, the viewing angle compresses the texture on screen, so one pixel on the screen actually covers several texels of the texture, but only one of them is chosen for display. From the information I have access to in the shaders, I don't see how to perform an operation that interpolates the required color.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't seem to view the images, but from what you are describing, you seem to be looking for mipmapping. Please google it; it is a very simple and very widely used technique, and you will be able to enable it by adding one or two lines to your program. Good luck. I'd be more detailed, but I am out of time for today.
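
For a GL 3.2 setup like the one described, those one or two lines would look roughly like this (a sketch; tex is assumed to be a texture object whose base level has already been uploaded with glTexImage2D):

    glBindTexture(GL_TEXTURE_2D, tex);
    glGenerateMipmap(GL_TEXTURE_2D);  // build the mipmap chain (core since GL 3.0)
    // Sample the two nearest mip levels and blend (trilinear filtering).
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

The minification filter is what stops the flickering: when one screen pixel covers many texels, the sampler reads from a pre-averaged, smaller mip level instead of picking a single texel.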

3D spheres and adding textures in OpenGL

I have been asked to create a 3D sphere and add textures to it so that it looks like the different planets of the Solar System. However, 3ds Max was not mentioned as mandatory.
So, how can I make a 3D sphere using OpenGL and add textures to it? Should I use glutSphere, or am I supposed to use some other method, and how do I apply the textures?
The obvious route would be gluSphere (note, it's glu, not glut) with gluQuadricTexture to get the texturing done.
I am not sure if glutSolidSphere has texture coordinates (as far as I can remember, they were either incorrect or nonexistent). I remember that this was a great resource to get me started on the subject, though:
http://paulbourke.net/texture_colour/texturemap/
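
In code, the gluSphere route is only a few lines (a sketch for fixed-function OpenGL with GLU; planetTexture is a hypothetical texture id you have already loaded):

    #include <GL/glu.h>

    GLUquadric* quad = gluNewQuadric();
    gluQuadricTexture(quad, GL_TRUE);     // generate texture coordinates
    gluQuadricNormals(quad, GLU_SMOOTH);  // smooth normals, for lighting

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, planetTexture);  // hypothetical texture id
    gluSphere(quad, 1.0, 32, 32);         // radius, slices, stacks
    gluDeleteQuadric(quad);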
EDIT:
I just remembered that subdividing an icosahedron gives a better sphere, and texture coordinates are easier to implement that way. See here:
http://www.gamedev.net/topic/116312-request-for-help-texture-mapping-a-subdivided-icosahedron/
and
http://www.sulaco.co.za/drawing_icosahedron_tutorial.htm
and
http://student.ulb.ac.be/~claugero/sphere/
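
One subdivision step is straightforward to sketch (not taken from the linked tutorials): split every triangle into four and push the new vertices out onto the unit sphere by normalizing them.

    #include <cmath>
    #include <vector>

    struct V3 { float x, y, z; };

    static V3 midpointOnSphere(const V3& a, const V3& b)
    {
        V3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
        float len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
        return { m.x / len, m.y / len, m.z / len };  // project onto the sphere
    }

    // 'tris' holds three vertices per triangle; returns the refined list.
    std::vector<V3> subdivide(const std::vector<V3>& tris)
    {
        std::vector<V3> out;
        for (size_t i = 0; i + 2 < tris.size(); i += 3) {
            V3 a = tris[i], b = tris[i + 1], c = tris[i + 2];
            V3 ab = midpointOnSphere(a, b);
            V3 bc = midpointOnSphere(b, c);
            V3 ca = midpointOnSphere(c, a);
            // Four new triangles replace the original one.
            out.insert(out.end(), { a, ab, ca,  ab, b, bc,
                                    ca, bc, c,  ab, bc, ca });
        }
        return out;
    }

Spherical texture coordinates can then be computed per vertex, e.g. u = 0.5 + atan2(z, x) / (2*pi) and v = 0.5 - asin(y) / pi, though the seam and the poles need the special handling discussed in the links above.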

OpenGL 2d drawing facing the user

I have an OpenGL scene in which the user can rotate the camera. I have some two-dimensional shapes that I would like to always face the user. I do have the forward-facing vector, and I do have the screen point at which the component should be drawn. I'm not sure of the best way to approach this problem: should I be rotating the shape toward the forward vector (which I'm not entirely sure how to do correctly)? Or is there another way I can just draw in two dimensions and ignore the rotation of the camera (maybe by using an orthographic projection)? Any sample code to help with this would be appreciated.
PS - I'm doing this in Java, but the language is irrelevant here (it is just OpenGL specific).
I already answered this in "Inverting rotation in 3D, to make an object always face the camera?".
My first thought is to use the gluLookAt matrix:
http://www.opengl.org/resources/faq/technical/viewing.htm
I would say that you keep the positions of the 2D objects and then take the "eye" (camera) position and set it as the look-at target for each 2D object. That should keep them facing the camera.
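
If you stay with the fixed-function pipeline, another common trick (a sketch, not necessarily better than the gluLookAt approach; objX/objY/objZ and drawShape() are hypothetical) is to translate to the object's position and then overwrite the rotation part of the modelview matrix with the identity, so the shape is drawn screen-aligned:

    // Assumes glMatrixMode(GL_MODELVIEW) and no scaling in the matrix.
    float m[16];
    glPushMatrix();
    glTranslatef(objX, objY, objZ);        // hypothetical object position
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // Reset the upper-left 3x3 (the rotation) to identity.
    m[0] = 1; m[1] = 0; m[2]  = 0;
    m[4] = 0; m[5] = 1; m[6]  = 0;
    m[8] = 0; m[9] = 0; m[10] = 1;
    glLoadMatrixf(m);
    drawShape();                           // hypothetical 2D draw call
    glPopMatrix();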

3d rendering of a surface from a depthmap

Using stereovision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe", associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).
What would be the most efficient way to translate those depthmaps into 3D? Each depthmap is 752x480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what Ogre's Terrain component does, except that I want to be able to place the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you mean by "merge the depthmaps with the environment".
Anyway, in your case, you seem set on making the depthmaps 3D using terrain-heightmap techniques.
Since each depth map is screen-aligned, you can use a simple screen-space raycasting technique: build a compositor in Ogre3D that takes the depth map and transforms it into the pixels you want.
Translation and rotation recovered from a depth map may be limited to x and y on screen because, as with terrain heightmaps (you cannot have caves in a heightmap), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that recovers objects' world-space positions, but only if you have a lot of information about the camera that generated the depth map, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
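
For reference, the core of position reconstruction is a single unprojection per pixel. A sketch using Ogre's math types (it assumes the depth map stores normalized depth in [0, 1] and that you still have the keyframe camera's view and projection matrices):

    #include <OgreMatrix4.h>
    #include <OgreVector3.h>
    #include <OgreVector4.h>

    // invViewProj = (projMatrix * viewMatrix).inverse() of the keyframe camera.
    // u, v are the pixel's texture coordinates in [0, 1].
    Ogre::Vector3 reconstructWorldPos(float u, float v, float depth,
                                      const Ogre::Matrix4& invViewProj)
    {
        Ogre::Vector4 ndc(u * 2.0f - 1.0f,
                          1.0f - v * 2.0f,        // flip v: texture space is top-down
                          depth * 2.0f - 1.0f,    // depth convention may differ
                          1.0f);
        Ogre::Vector4 world = invViewProj * ndc;  // unproject
        return Ogre::Vector3(world.x, world.y, world.z) / world.w;
    }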