OpenGL Texture Mapping - c++

I am new to game programming and graphics programming. However, I eagerly wish to learn, so I have begun building a game engine with OpenGL.
I have implemented all of the basic graphical features, and now I want to add texture support for my triangle meshes.
The only tutorials I can find for texture mapping are for a single polygon - how do I define a texture that wraps around the entire mesh?
I am loading the meshes from .3ds files using lib3ds (http://code.google.com/p/lib3ds/). Do .3ds files carry some texture coordinate data or something?

Here's a page showing an example of reading out the texture coordinates:
http://newsgroups.derkeiler.com/Archive/Comp/comp.graphics.api.opengl/2005-07/msg00168.html
However, not all 3ds files contain texture information - see warning in:
http://www.groupsrv.com/computers/about186619.html

If your models are much more complex than cubes, you use a UV map to translate the 3-dimensional surface of your model into a flat image for texture mapping.
Looks like this thread on gamedev has an example of how to extract what 3DS calls "texels" as well as materials.
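To sketch what consuming those per-vertex texels could look like, here's a minimal, self-contained example. The structs below are stand-ins mirroring lib3ds 1.x's parallel point/texel arrays (in the real library they would come from `mesh->pointL` and `mesh->texelL` after `lib3ds_file_load()`); the interleaving step is the part that carries over to real code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-ins for lib3ds 1.x data: points are float[3] positions and
// texels are float[2] (u, v) pairs, stored in parallel arrays. In real
// code these come from mesh->pointL and mesh->texelL.
struct Point { float pos[3]; };
using Texel = float[2];

// One interleaved vertex: position + texture coordinate, ready to hand
// to glVertexPointer/glTexCoordPointer (or glVertexAttribPointer) with
// a stride of sizeof(Vertex).
struct Vertex { float x, y, z, u, v; };

// Zip the two parallel arrays into one interleaved vertex array.
// If the mesh has no texels (nTexels == 0), fall back to (0, 0).
std::vector<Vertex> interleave(const Point* points, std::size_t nPoints,
                               const Texel* texels, std::size_t nTexels) {
    std::vector<Vertex> out;
    out.reserve(nPoints);
    for (std::size_t i = 0; i < nPoints; ++i) {
        Vertex v{points[i].pos[0], points[i].pos[1], points[i].pos[2], 0.0f, 0.0f};
        if (i < nTexels) { v.u = texels[i][0]; v.v = texels[i][1]; }
        out.push_back(v);
    }
    return out;
}
```

The zero-texel fallback matters because of the warning above: not every .3ds file actually carries texture coordinates.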

Related

3D texture sampling in OpenGL

I am currently trying to learn ray casting on a 3D texture using something like glTexImage3D. I was following this tutorial from the start. My ultimate goal is to produce a program which can work like this:
My understanding is that this was rendered using a raycasting method and that the model was imported as a 3D texture. The raycasting and texture sampling were performed in the fragment shader. I hope I can replicate this program as practice. Could you kindly answer my questions?
What file format should be used to import the 3D texture?
Which glsl functions should I use in detecting the distance between my ray and the texture?
What are the differences of 3D texture sampling and volume rendering?
Are there any available online tutorials for me to follow?
How can I produce my own 3D texture? (Is it possible to make one using blender?)
1. What file format should be used to import the 3D texture?
It doesn't matter - OpenGL doesn't deal with file formats.
2. Which glsl functions should I use in detecting the distance between my ray and the texture?
There's no "ready to use" raycasting function. You have to implement a raycaster yourself, i.e. between a start and an end point, sample the texture along a line (the ray) and integrate the samples up to a final color value.
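The integration loop described above can be prototyped on the CPU before moving it into a fragment shader. This is a hedged, minimal sketch: the density field is a stand-in function (a shader would call `texture(volume, p).r` instead), and front-to-back "over" compositing accumulates the samples.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in for sampling a 3D texture; in GLSL this would be
// texture(volumeSampler, p).r. Here: density 1 inside the unit cube, 0 outside.
float sampleDensity(const Vec3& p) {
    bool inside = p.x >= 0 && p.x <= 1 && p.y >= 0 && p.y <= 1
               && p.z >= 0 && p.z <= 1;
    return inside ? 1.0f : 0.0f;
}

// March from 'start' to 'end' in 'steps' samples, compositing front to back.
// Returns accumulated opacity in [0, 1]; a real raycaster accumulates color
// the same way and stops early once alpha saturates.
float integrateRay(Vec3 start, Vec3 end, int steps) {
    float alpha = 0.0f;
    float stepAlpha = 0.05f;              // opacity of one full-density sample
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) / steps;     // sample at segment midpoints
        Vec3 p{start.x + t * (end.x - start.x),
               start.y + t * (end.y - start.y),
               start.z + t * (end.z - start.z)};
        float a = sampleDensity(p) * stepAlpha;
        alpha += (1.0f - alpha) * a;      // front-to-back "over" compositing
        if (alpha > 0.99f) break;         // early ray termination
    }
    return alpha;
}
```

In the shader version, `start` and `end` come from rasterizing the volume's bounding box and the loop body is identical.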
3. What are the differences of 3D texture sampling and volume rendering?
Sampling a 3D texture is not much different from sampling a 2D, 1D, cubemap or any other texture topology. For a given vector A, a certain vector B is returned: either the value of the sample closest to the location pointed to by A (nearest sampling) or an interpolated value.
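To make "nearest vs. interpolated" concrete, here is a CPU sketch of both filtering modes on a tiny 3D array. For simplicity it assumes texel centers sit at integer coordinates (real GL places them at half-texel offsets, which shifts the math slightly):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A tiny 3D scalar texture stored as a flat array of size w*h*d.
struct Volume {
    int w, h, d;
    std::vector<float> data;
    float at(int x, int y, int z) const { return data[(z * h + y) * w + x]; }
};

// GL_NEAREST: return the sample whose center is closest to (x, y, z).
float sampleNearest(const Volume& v, float x, float y, float z) {
    return v.at((int)std::round(x), (int)std::round(y), (int)std::round(z));
}

// GL_LINEAR on a 3D texture: trilinear interpolation of the 8 surrounding
// texels - lerp along x, then y, then z.
float sampleTrilinear(const Volume& v, float x, float y, float z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    float c00 = v.at(x0, y0, z0)         * (1 - fx) + v.at(x0 + 1, y0, z0)         * fx;
    float c10 = v.at(x0, y0 + 1, z0)     * (1 - fx) + v.at(x0 + 1, y0 + 1, z0)     * fx;
    float c01 = v.at(x0, y0, z0 + 1)     * (1 - fx) + v.at(x0 + 1, y0, z0 + 1)     * fx;
    float c11 = v.at(x0, y0 + 1, z0 + 1) * (1 - fx) + v.at(x0 + 1, y0 + 1, z0 + 1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```

The GPU does exactly this (in hardware) when the texture's filter is set to GL_LINEAR.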
4. Are there any available online tutorials for me to follow?
http://www.real-time-volume-graphics.org/?page_id=28
5. How can I produce my own 3D texture? (Is it possible to make one using blender?)
You can certainly use Blender, e.g. by baking volumetric data like fog density. But the whole subject is too broad to be sufficiently covered here.

OpenGL - texturing mapping 3D object

I have a model of a skull loaded from an .obj file based on this tutorial. While I understand texture mapping of a cube (make a triangle on the texture in the range [0,1], select one of the six sides, select one of the two triangles on that side and map it with your triangle from the texture), I have trouble coming up with any solution for texture mapping my skull. There are a few thousand triangles on it, and I think that texture mapping them manually is more than wrong.
Is there any solution for this problem? I'll appreciate any piece of code since it may tell me more than just description of solution.
You can generate your UV coordinates automatically, but this will probably produce bad-looking output except for very simple textures.
For detailed textures that have eyes, ears, etc., you need to create your UV coordinates by hand in a 3D modeling tool such as Blender, 3DS Max, etc. There are a lot of tutorials all over the internet on how to do that. (https://www.youtube.com/watch?v=eCGGe4jLo3M)
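For completeness, one common automatic scheme is spherical projection: convert each vertex position (relative to the model's center) to spherical angles and use those as UV. A minimal sketch - the poles and the wrap-around seam are exactly where this breaks down on something like a skull, which is why hand-made UVs win:

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Spherical projection: map a direction from the model's center to (u, v)
// in [0, 1]. u comes from the longitude (atan2), v from the latitude (acos).
// Any mesh can be unwrapped this way automatically, but texels get stretched
// near the poles and there is a visible seam where u wraps from 1 back to 0.
UV sphericalUV(float x, float y, float z) {
    const float PI = 3.14159265358979f;
    float len = std::sqrt(x * x + y * y + z * z);
    if (len == 0.0f) return {0.0f, 0.0f};   // degenerate: vertex at the center
    float u = 0.5f + std::atan2(z, x) / (2.0f * PI);
    float v = std::acos(y / len) / PI;
    return {u, v};
}
```

You would run this once per vertex at load time and store the result alongside the positions.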

OpenGL create trees

I am using PyOpenGL with PyGame (although I am also trying to copy the game into c++ as well), and I would like to draw some low-poly trees in my game, something like the one in the picture below.
But at the moment I only know how to draw simple flat surfaces and put textures on them (by creating an array of x,y,z coordinates and texture coordinates and using glDrawArrays). Is there a way to make something like the tree below using only OpenGL (would it involve 3D texture coordinates?), or do I need an external graphics engine?
If I do need a graphics designer, does anyone have any recommendations, and then am I right that I would need to pass the vertices to an array in python and then use that in glDrawElements?
After some point, you cannot handle complex objects by just defining 3D vertices in OpenGL. Instead, you need an object model that you can include in your project. Most object models come with their texture files and texture coordinates included, so you don't need to worry about texturing them.
For loading objects into your scene, I suggest you use the Assimp library. After you set up your environment, the only thing you have to do is search for free low-poly tree models. Here is a webpage where you can find free low-poly trees: http://www.loopix-project.com/
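That said, if you want to stay with plain vertex arrays a bit longer, a low-poly tree is still just triangles; no 3D texture coordinates are needed. A hedged sketch generating a cone canopy in the same flat x,y,z layout glDrawArrays(GL_TRIANGLES, ...) expects (the trunk would be a similar loop producing a cylinder):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A low-poly cone "canopy": 'sides' triangles fanning from an apex down to a
// circular base. The output is a flat array of x,y,z triples, three vertices
// per triangle - exactly the layout glDrawArrays(GL_TRIANGLES, ...) consumes.
std::vector<float> makeCone(float radius, float height, int sides) {
    const float PI = 3.14159265358979f;
    std::vector<float> verts;
    for (int i = 0; i < sides; ++i) {
        float a0 = 2.0f * PI * i / sides;
        float a1 = 2.0f * PI * (i + 1) / sides;
        // apex of the cone
        verts.insert(verts.end(), {0.0f, height, 0.0f});
        // two adjacent vertices on the base circle
        verts.insert(verts.end(), {radius * std::cos(a0), 0.0f, radius * std::sin(a0)});
        verts.insert(verts.end(), {radius * std::cos(a1), 0.0f, radius * std::sin(a1)});
    }
    return verts;
}
```

Fewer sides gives the faceted low-poly look; a loaded model simply replaces this generated array with one an artist made.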

How to draw texts on a 3D objects (such as sphere)

I am learning OpenGL on the Linux platform. Recently, I tried to use text created by glutBitmapCharacter() as the texture for some quadric objects provided by glu or glut. However, glutBitmapCharacter() does not return a pointer, so I can't feed it to glTexImage2D(). I have googled it for quite a while, but all I found were topics related to the Android SDK, which I have no experience with.
All I can think of is to render the text, read it from the buffer using glReadPixels(), and save it to a file. Next, read the pixels back from the file and get a pointer to them. Finally, draw the 3D objects with the text as the texture (i.e. feed the pointer to glTexImage2D()).
However, that's kind of silly. What I want to ask is: is there some other, alternative way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL. GLUT does not provide any tools for that. One possible option would be to implement your own text rendering: load glyphs using FreeType, create a texture with the glyphs, and apply that texture to the polygons. Freetype-GL is a tiny helper library that would facilitate a lot of that work.
Another option would be to again load the text glyphs into a texture and then apply them as decals over the geometry. That way you could still simulate a 2D text drawing in a flat surface (the decal) and then apply that on top of a 3D object.
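The texture-upload half of either approach can be sketched without any font library at all. The 8x8 box glyph below is hardcoded purely for illustration (FreeType hands you comparable monochrome bitmaps via face->glyph->bitmap after rendering a glyph in mono mode):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Expand a 1-bit-per-pixel 8x8 glyph (the kind a tiny bitmap-font table, or
// FreeType in mono render mode, gives you) into an 8-bit luminance image.
// The resulting buffer is what you could hand to glTexImage2D with format
// GL_LUMINANCE (GL_RED on modern GL) and type GL_UNSIGNED_BYTE.
std::vector<std::uint8_t> expandGlyph(const std::uint8_t rows[8]) {
    std::vector<std::uint8_t> img(8 * 8, 0);
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            if (rows[y] & (0x80 >> x))   // test bit x of this row, MSB first
                img[y * 8 + x] = 255;    // opaque texel
    return img;
}
```

A full renderer packs many such glyphs into one atlas texture and draws each character as a textured quad with the glyph's sub-rectangle as its UVs.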

Merging a Sphere and Cylinder

I want to render a spring using spheres and cylinders. Each cylinder has two spheres, one at each end, and all the cylinders are placed along the spring's centre line. I have achieved this, and the rendering looks good. I am presently doing it using gluSphere and gluCylinder.
Now, when I look at the performance, it's not good - it's very slow. So I want to know if the following are possible:
Is it possible to combine the surfaces of the spheres and cylinders and render only the outer hull, but not the inner, covered parts of the spheres?
I also read about VBOs - is it possible to use gluSphere and gluCylinder with VBOs?
I cannot use a display list because the properties of the spring keep changing!
Can anyone suggest a better approach?
You might want to reconsider the way you are drawing springs. In my opinion there are two valid approaches.
Load a spring model using Assimp or some other model loading software that is easily integrated with OpenGL. Free 3D models can be found at Turbo Squid or through Google's 3D warehouse (while in Google Sketch-Up).
Draw the object purely in OpenGL. The idiomatic way to draw this kind of object using the post-fixed-function OpenGL pipeline is by drawing volumetric 3D lines. The more lines you draw, the more curvature you can give to your spring, at the expense of rendering time.
For drawing springs I would recommend that you define a set of points (with adjacency) that define the shape of your spring and draw these points with a primitive type of GL_LINE_STRIP_ADJACENCY. Then, in the shader program use a geometry shader to expand this pixel-bound line strip into a set of volumetric 3D lines composed of triangle strips.
This blog post gives an excellent description of the technique.
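The point set feeding that geometry shader is just a helix sampled along the spring's axis. A sketch of generating it (the radius, height, and coil parameters are illustrative); with GL_LINE_STRIP_ADJACENCY the first and last points serve as adjacency-only neighbors for the end segments:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P3 { float x, y, z; };

// Sample a helix: 'coils' full turns of radius 'r', total height 'h',
// 'segments' points per turn. Drawn as GL_LINE_STRIP_ADJACENCY and expanded
// in a geometry shader, these points become the volumetric spring.
// More segments per turn = smoother curvature, more geometry to render.
std::vector<P3> helixPoints(float r, float h, int coils, int segments) {
    std::vector<P3> pts;
    const float PI = 3.14159265358979f;
    int n = coils * segments;
    for (int i = 0; i <= n; ++i) {
        float t = (float)i / n;            // 0 .. 1 along the spring's axis
        float a = 2.0f * PI * coils * t;   // accumulated winding angle
        pts.push_back({r * std::cos(a), h * t, r * std::sin(a)});
    }
    return pts;
}
```

Because the spring's properties keep changing, you can regenerate (or deform in a shader) just this small point list each frame instead of rebuilding thousands of quadric triangles.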
Your best bet would probably be to take a quick tutorial in any 3D modeling software (Blender comes to mind) and then model your spring in its rest pose using CSG operations.
This approach not only rids you of redundant primitives but also makes it very easy to use your model with VBOs. All you have to do is to parse the output file of Blender (easiest would be .obj), retrieving arrays filled with vertex data (positions, normals, possibly texture coordinates).
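Parsing the .obj Blender writes can be sketched in a few lines for the simple case: "v" lines carry positions and "f" lines carry 1-based indices. This deliberately ignores normals, texture coordinates, and the v/vt/vn slash syntax, all of which a real loader would need to handle:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

struct ObjMesh {
    std::vector<float> positions;   // x,y,z triples, ready for a VBO
    std::vector<unsigned> indices;  // 0-based, 3 per triangle, for an index buffer
};

// Minimal .obj reader: only "v x y z" and triangular "f a b c" lines with
// plain indices (no v/vt/vn slashes). Note that .obj indices are 1-based,
// so they are shifted to 0-based for OpenGL.
ObjMesh parseObj(std::istream& in) {
    ObjMesh mesh;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            float x, y, z;
            ls >> x >> y >> z;
            mesh.positions.insert(mesh.positions.end(), {x, y, z});
        } else if (tag == "f") {
            unsigned a, b, c;
            ls >> a >> b >> c;
            mesh.indices.insert(mesh.indices.end(), {a - 1, b - 1, c - 1});
        }   // comments ("#") and other tags are skipped
    }
    return mesh;
}
```

The two arrays map directly onto a vertex VBO and an element buffer for glDrawElements.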
Lastly, to "animate" your spring, you can use the vertex shader. You just have to pass it another uniform describing how much the spring is deformed and do the rest of the transformation there.