OpenGL create trees

I am using PyOpenGL with PyGame (although I am also trying to port the game to C++), and I would like to draw some low-poly trees in my game, something like the one in the picture below.
But at the moment I only know how to draw simple flat surfaces and put textures on them (by creating an array of x, y, z coordinates and texture coordinates and using glDrawArrays). Is there a way to make something like the tree below using only OpenGL (would it involve 3D texture coordinates?), or do I need an external graphics engine?
If I do need an external engine, does anyone have any recommendations, and am I right that I would then need to pass the vertices into an array in Python and use that in glDrawElements?

Past a certain point, you cannot handle complex objects just by defining 3D vertices by hand in OpenGL. Instead you need a model file that you can load into your project. Most object models come with their texture files and texture coordinates included, so you don't need to worry about texturing them.
For loading models into your scene, I suggest the Assimp library. Once you have set up your environment, the only thing left to do is search for free low-poly tree models. Here is a webpage where you can find free low-poly trees: http://www.loopix-project.com/
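As a rough illustration, here is a minimal C++ sketch of loading a mesh with Assimp and extracting the positions, texture coordinates, and indices you would then hand to glDrawElements; the Vertex struct and the single-mesh assumption are mine, not part of Assimp, and error handling is trimmed:

    #include <assimp/Importer.hpp>
    #include <assimp/scene.h>
    #include <assimp/postprocess.h>
    #include <vector>

    struct Vertex { float px, py, pz, u, v; };  // position + UV, interleaved

    bool LoadMesh(const char* path,
                  std::vector<Vertex>& vertices,
                  std::vector<unsigned>& indices)
    {
        Assimp::Importer importer;
        // Triangulate so every face is exactly 3 indices
        const aiScene* scene = importer.ReadFile(path,
            aiProcess_Triangulate | aiProcess_FlipUVs);
        if (!scene || !scene->HasMeshes())
            return false;

        const aiMesh* mesh = scene->mMeshes[0];  // first mesh only, for brevity
        for (unsigned i = 0; i < mesh->mNumVertices; ++i) {
            const aiVector3D& p = mesh->mVertices[i];
            // UVs live in channel 0 when the model provides them
            aiVector3D t = mesh->HasTextureCoords(0)
                         ? mesh->mTextureCoords[0][i] : aiVector3D();
            vertices.push_back({ p.x, p.y, p.z, t.x, t.y });
        }
        for (unsigned f = 0; f < mesh->mNumFaces; ++f)  // triangulated faces
            for (unsigned j = 0; j < 3; ++j)
                indices.push_back(mesh->mFaces[f].mIndices[j]);
        return true;
    }

The two vectors map directly onto glBufferData calls for an interleaved vertex buffer plus an index buffer.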

Related

Creating a 3d card using DirectXTK with Dynamic texture on one side

I am trying to create a card game. I want to have a deck of cards where the back of each card is a fixed texture but the front is dynamic, i.e. it has some text fields on it as well as a picture. I have created a box sized 3x2x0.16 to represent my card. I can get the fixed texture to load, but I cannot find any code examples on the web that show how to load a fixed texture on one side of the box and a dynamic one on the other. Can anyone point me to some examples, please? I'm mainly using DirectXTK, but can probably fathom it out from any DirectX code too.
DirectX 11 is the version of DirectX I am using.
Any recommendations on how to do this would also be welcome.
Thanks
The easiest method for generating your cards, depending on how many there are and how large you want them, is to generate the faces at startup using render-to-texture. Effectively, draw your dynamic card faces exactly like you would draw them in the world, but use an orthographic projection matrix and a blank 2D texture object as the render target. Once you have that, cache these "dynamic" textures in an std::map and bind them when drawing a specific card.
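A minimal sketch of that setup in Direct3D 11 with DirectXTK; the face size, the text, the pictureSrv parameter, and the cardFaces map are illustrative assumptions, and error checking plus restoring the back-buffer render target are omitted:

    #include <d3d11.h>
    #include <wrl/client.h>
    #include <DirectXColors.h>
    #include <map>
    #include <string>
    #include "SpriteBatch.h"   // DirectXTK
    #include "SpriteFont.h"

    using Microsoft::WRL::ComPtr;

    // Render one dynamic card face into a texture and cache the result.
    void BakeCardFace(ID3D11Device* device, ID3D11DeviceContext* context,
                      DirectX::SpriteBatch* spriteBatch,
                      DirectX::SpriteFont* spriteFont,
                      ID3D11ShaderResourceView* pictureSrv,
                      std::map<std::string, ComPtr<ID3D11ShaderResourceView>>& cardFaces)
    {
        // Texture usable both as render target and as shader resource
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 256; desc.Height = 384;   // face resolution (assumed)
        desc.MipLevels = 1; desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ComPtr<ID3D11Texture2D> tex;
        device->CreateTexture2D(&desc, nullptr, &tex);
        ComPtr<ID3D11RenderTargetView> rtv;
        device->CreateRenderTargetView(tex.Get(), nullptr, &rtv);
        ComPtr<ID3D11ShaderResourceView> srv;
        device->CreateShaderResourceView(tex.Get(), nullptr, &srv);

        // Redirect drawing into the card-face texture
        D3D11_VIEWPORT vp = { 0.0f, 0.0f, 256.0f, 384.0f, 0.0f, 1.0f };
        context->RSSetViewports(1, &vp);
        context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
        const float white[4] = { 1, 1, 1, 1 };
        context->ClearRenderTargetView(rtv.Get(), white);

        // SpriteBatch/SpriteFont draw the dynamic 2D content
        spriteBatch->Begin();
        spriteFont->DrawString(spriteBatch, L"Ace of Spades",
                               DirectX::XMFLOAT2(10, 10), DirectX::Colors::Black);
        RECT pictureRect = { 28, 64, 228, 320 };
        spriteBatch->Draw(pictureSrv, pictureRect);
        spriteBatch->End();

        // Restore your usual render target here, then cache the finished face
        cardFaces["ace_of_spades"] = srv;
    }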
If your faces are relatively small, or you want to save on texture memory, you can stitch multiple card faces together into a large sheet of textures, then use some shader scaling logic to reference a subsection of the sheet for rendering a specific texture. With this, you can assemble "decks" of cards that only contain the faces in use in that particular game, allowing you to evict the others from GPU RAM.
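For the sheet approach, the per-card UV sub-rectangle is easy to compute on the CPU and pass to the shader as a scale and offset; here is a hypothetical helper for an evenly divided grid:

    // Hypothetical helper: UV sub-rectangle of card `index` in an atlas
    // laid out as a cols x rows grid of equal-sized faces.
    struct UvRect { float u0, v0, u1, v1; };

    UvRect CardUv(int index, int cols, int rows)
    {
        int cx = index % cols;          // column in the sheet
        int cy = index / cols;          // row in the sheet
        float du = 1.0f / cols;
        float dv = 1.0f / rows;
        return { cx * du, cy * dv, (cx + 1) * du, (cy + 1) * dv };
    }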

How to draw texts on a 3D objects (such as sphere)

I am learning OpenGL on the Linux platform. Recently, I tried to use text created by glutBitmapCharacter() as the texture for some quadric objects provided by GLU or GLUT. However, glutBitmapCharacter() does not return a pointer, so I can't feed it to glTexImage2D(). I have googled it for quite a while, but all I found were topics related to the Android SDK, which I have no experience with.
All I can think of is to render the text, read it from the framebuffer using glReadPixels(), and save it to a file. Next, read the pixels back from the file into a buffer. Finally, draw the 3D objects with the text as the texture (i.e. feed the pointer to glTexImage2D()).
However, that's kind of silly. What I want to ask is: is there some other, better way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL. GLUT does not provide any tools for that. One option is to implement your own text rendering, for example by loading glyphs with FreeType, rendering them into a texture, and applying that texture to the polygons. Freetype-GL is a tiny helper library that would make this much easier.
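A minimal sketch of the FreeType route, rasterizing a single glyph into an OpenGL texture (the font path and pixel size are placeholders; a real text renderer would pack a whole glyph atlas instead of one texture per character):

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    // Rasterize one character with FreeType and upload it as a
    // single-channel OpenGL texture. Returns 0 on failure.
    GLuint GlyphTexture(const char* fontPath, char c)
    {
        FT_Library ft;
        FT_Face face;
        if (FT_Init_FreeType(&ft)) return 0;
        if (FT_New_Face(ft, fontPath, 0, &face)) return 0;
        FT_Set_Pixel_Sizes(face, 0, 48);             // 48 px tall glyphs
        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) return 0;

        const FT_Bitmap& bmp = face->glyph->bitmap;  // 8-bit coverage values

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       // rows are tightly packed
        // GL_ALPHA suits legacy GL; on core profiles use GL_RED plus a swizzle
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bmp.width, bmp.rows,
                     0, GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        FT_Done_Face(face);
        FT_Done_FreeType(ft);
        return tex;
    }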
Another option would be to again load the text glyphs into a texture and then apply them as decals over the geometry. That way you could still simulate a 2D text drawing in a flat surface (the decal) and then apply that on top of a 3D object.

A couple of textures into one in Blender

I have some model in Blender. I'd like to:
Combine a few different textures into one and save it as a bitmap
Create a UV mapping for these combined textures
I need to solve this problem for textured models in OpenGL. My data structure only allows me to bind one texture to one model, so I'd like to have one texture per model. I'm aware of the fact that I could use a GL_TEXTURE_xD_ARRAY texture, but I don't want to complicate my project. I know how to do simple UV mapping in Blender.
My questions:
Can I do phases 1. and 2. exclusively in Blender?
Is the Blender Bake technique what I'm searching for?
Are there tutorials that show how to do it? (for this one specific problem)
Maybe somebody can advise me of another Blender technique (or an OpenGL solution)
Combine a few different textures into one and save it as a bitmap
Create a UV mapping for these combined textures
You mean generating a texture atlas?
Can I do phases 1. and 2. exclusively in Blender?
No. But it would surely be a well-received add-on.
Is the Blender Bake technique what I'm searching for?
No. Blender Bake generates texture content using the rendering process. For example, you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame in a flythrough, the texture is used as the source for the illumination terms (it acts like a cache). Another application is generating textures for a game engine from Blender's procedural materials.
Maybe somebody can advise me of another Blender technique (or an OpenGL solution)
I think a texture array would really be the best solution, as it also avoids problems with wrapped/repeated textures.
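For reference, a minimal sketch of building a texture array in OpenGL 3.0+; the dimensions and the layers vector are placeholders, and every layer must share the same size:

    #include <GL/glew.h>   // any loader exposing glTexImage3D
    #include <vector>

    // Upload several same-sized RGBA images as one GL_TEXTURE_2D_ARRAY.
    GLuint MakeTextureArray(int width, int height,
                            const std::vector<const unsigned char*>& layers)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

        // Allocate every layer at once, then fill them one by one
        glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height,
                     (GLsizei)layers.size(), 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        for (size_t i = 0; i < layers.size(); ++i)
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, (GLint)i,
                            width, height, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE, layers[i]);

        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Sample in GLSL with sampler2DArray and a per-draw layer index
        return tex;
    }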
Another possibility is to use projection painting. An object in Blender can have multiple UV maps; if importing it doesn't create each UV map, then you may need to align each one by hand. Then you create a new UV map that lays the entire model out onto one image.
In Texture Paint mode you can use projection painting to use the material from one UV map as the paint brush for painting onto the new image.

OpenGL Texture Mapping

I am new to game programming and graphics programming. However, I eagerly wish to learn, so I have begun building a game engine with OpenGL.
I have implemented all of the basic graphical features, and now I want to add texture support for my triangle meshes.
The only tutorials I can find for texture mapping are for a single polygon. How do I define a texture that wraps around the entire mesh?
I am loading the meshes from .3ds files using lib3ds (http://code.google.com/p/lib3ds/). Do .3ds files carry some texture coordinate data or something?
Here's a page showing an example of reading out the texture coordinates:
http://newsgroups.derkeiler.com/Archive/Comp/comp.graphics.api.opengl/2005-07/msg00168.html
However, not all .3ds files contain texture information - see the warning in:
http://www.groupsrv.com/computers/about186619.html
If your models are much more complex than cubes, you use a UV map to translate the 3-dimensional surface of your model into a flat image for texture mapping.
Looks like this thread on gamedev has an example of how to extract what 3DS calls "texels" as well as materials.
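A rough sketch of reading those texels with the lib3ds 1.x API (field names follow that version; error handling omitted):

    #include <lib3ds/file.h>
    #include <lib3ds/mesh.h>
    #include <cstdio>

    // Walk every mesh in a .3ds file and print its per-vertex UVs, if any.
    void DumpTexels(const char* path)
    {
        Lib3dsFile* file = lib3ds_file_load(path);
        if (!file) return;

        for (Lib3dsMesh* mesh = file->meshes; mesh; mesh = mesh->next) {
            if (mesh->texels == 0) continue;   // this mesh carries no UVs
            for (Lib3dsDword i = 0; i < mesh->texels; ++i)
                std::printf("%s: uv[%u] = (%f, %f)\n",
                            mesh->name, (unsigned)i,
                            mesh->texelL[i][0], mesh->texelL[i][1]);
        }
        lib3ds_file_free(file);
    }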

Dynamically generate Triangle Lists for a Complex 3D Mesh

In my application, I have the shape and dimensions of a complex 3D solid (say a Cylinder Block) taken from user input. I need to construct vertex and index buffers for it.
Since the dimensions are taken from user input, I cannot use Blender or 3ds Max to manually create my model. What is the textbook method for dynamically generating such a mesh?
I am looking for something that will generate the triangles given the vertices, edges and holes. Something like TetGen, though TetGen has no way of excluding the triangles which fall on the interior of the solid/mesh.
Sounds like you need to create an array of vertices and a list of triangles, each of which contains a list of 3 indices into the vertex array. There is no easy way to do this. To draw a box, you need 8 vertices and 12 triangles (2 per side), as in the sketch below. Some representations use explicit edge lists too. I suspect this is way more work than you want to do, so.....
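To make the box example concrete, here is one possible layout as an indexed triangle list; the corner ordering and the counter-clockwise-from-outside winding are conventions chosen for this sketch:

    // 8 corners of a unit cube
    float vertices[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back face  (z = 0)
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // front face (z = 1)
    };

    // 12 triangles (2 per side), CCW when viewed from outside
    unsigned indices[12][3] = {
        {0,2,1}, {0,3,2},   // back
        {4,5,6}, {4,6,7},   // front
        {0,1,5}, {0,5,4},   // bottom
        {3,7,6}, {3,6,2},   // top
        {0,4,7}, {0,7,3},   // left
        {1,2,6}, {1,6,5}    // right
    };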
What you need is a mesh library that can do CSG (constructive solid geometry). That way you should be able to specify the dimensions of the block and then the dimensions of the cylinders, and tell the library to cut them out for you (a CSG difference). All the vertex and triangle management would be done for you. In the end, such a library should be able to export the mesh to some common format. The only problem here is that I don't know the name of such a library. Something tells me that Blender can actually do all of this if you know how to script it. I also suspect there are one or two fairly good libraries out there.
Google actually brought me back to StackOverflow with this:
A Good 3D mesh library
You may ultimately need to generate simple meshes programmatically and manipulate them with a library if it doesn't provide functions for creating meshes (they all talk about manipulating a mesh or doing CSG).
It depends a bit on your requirements.
If you don't need to access the mesh after generating, but only need to render it, the fastest option is to create a vertex buffer with glGenBuffers, map it into memory with glMapBuffer, write your data into the buffer, then unmap it with glUnmapBuffer. Drawing will be very fast because all data can be uploaded to video card memory.
If you do need to access the mesh data after generating it, or if you expect to modify it regularly, you might be better off building your vertex data in a regular array and using vertex arrays with glVertexPointer and friends.
You can also use a combination: generate the mesh data in main memory, then memcpy() it into a mapped vertex buffer.
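A minimal sketch of the buffer-object path described above; GLEW and the FillVertices generator are assumptions standing in for your loader and your real mesh construction:

    #include <GL/glew.h>

    // Hypothetical generator: writes vertexCount xyz triples into dst.
    static void FillVertices(float* dst, int vertexCount)
    {
        for (int i = 0; i < vertexCount * 3; ++i)
            dst[i] = 0.0f;   // replace with real mesh generation
    }

    GLuint BuildMeshVbo(int vertexCount)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        // Allocate GPU storage up front, with no initial data
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float),
                     nullptr, GL_STATIC_DRAW);

        // Map the buffer and generate the mesh straight into it
        float* p = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
        if (p) {
            FillVertices(p, vertexCount);
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
        return vbo;
    }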
Finally, if by "dimensions" you mean just scaling the entire thing, you can create it offline in any 3D modelling program and use the OpenGL transformations, for example glScale, to apply the dimensions to the mesh while rendering.
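If scaling really is all you need, in the fixed-function pipeline that is just a matrix push around your draw call; DrawMesh here is a placeholder for your existing rendering code:

    #include <GL/gl.h>

    void DrawMesh();   // placeholder: renders the unit-sized mesh

    // Apply user-entered dimensions at draw time (legacy fixed-function GL)
    void DrawScaledMesh(float userWidth, float userHeight, float userDepth)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glScalef(userWidth, userHeight, userDepth);
        DrawMesh();
        glPopMatrix();
    }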
I'm not sure if the Marching Cubes algorithm would be of any help.