I'm working on an .obj loader for my 3D editor and plan to build it on Assimp. In my editor, meshes will have a quad wireframe drawn over the triangulated polygons, and I need to be able to pick both triangles that form a polygon. But as far as I know, Assimp rebuilds the data to be OpenGL-ready and doesn't let you keep quads. My plan is to keep the data as it is in the .obj (quads) and not triangulate it, but if I remove aiProcess_Triangulate my rendering gets corrupted and nothing draws correctly. What is the best way to keep the data as quads, without duplicating it, so that I can both interact with it and prepare it for rendering? Can Assimp provide this, or is writing my own loader the only way?
It depends on what you mean by "load". GL_QUADS was removed in OpenGL 3.1+ (core profile), so the renderer won't accept quads directly, but quad topology is still useful for scene building. The only trouble I've had with my own .obj loader so far is parsing floats with sscanf: it follows the current locale, so on a system whose locale expects a comma as the decimal separator it rejects the dots in the file. https://rocketgit.com/user/bowler17/gl/source/tree/branch/wrench
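A minimal sketch of that approach, keeping quad faces for the editor while building a separate triangle index buffer for the GPU, and forcing the "C" numeric locale so sscanf accepts dots. The struct names and the restriction to plain "v x y z" / "f a b c d" lines are assumptions for illustration, not Assimp's API or the linked repo:

    // Minimal sketch (assumed names): parse quad faces from an .obj, keep them
    // as quads for picking/wireframe, and build a separate triangle index
    // buffer for rendering. Handles only "v x y z" and "f a b c d" lines with
    // plain 1-based position indices.
    #include <array>
    #include <clocale>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct QuadMesh {
        std::vector<Vec3> positions;                // shared vertex data, no duplication
        std::vector<std::array<unsigned, 4>> quads; // editor-side topology (picking, wireframe)
        std::vector<unsigned> triIndices;           // GPU index buffer for glDrawElements
    };

    bool loadQuadObj(const char* path, QuadMesh& out) {
        // sscanf honours LC_NUMERIC; force the "C" locale so "1.5" parses
        // even when the user's locale expects a comma as decimal separator.
        std::setlocale(LC_NUMERIC, "C");

        std::FILE* f = std::fopen(path, "r");
        if (!f) return false;

        char line[512];
        while (std::fgets(line, sizeof line, f)) {
            Vec3 v;
            unsigned a, b, c, d;
            if (std::sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3) {
                out.positions.push_back(v);
            } else if (std::sscanf(line, "f %u %u %u %u", &a, &b, &c, &d) == 4) {
                out.quads.push_back({a - 1, b - 1, c - 1, d - 1}); // .obj is 1-based
                // fan-triangulate only for rendering; the quad list stays intact
                out.triIndices.insert(out.triIndices.end(),
                                      {a - 1, b - 1, c - 1, a - 1, c - 1, d - 1});
            }
        }
        std::fclose(f);
        return true;
    }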
I'm facing a simple issue, but I can't find a way to solve it:
I have the coordinates of a light source. I would like to draw a white circle centered on this light source.
How can I do that? Is there an OpenGL function for this, or should I add vertices manually to create a circle?
Thanks
OpenGL does not have primitives like circles. It only has triangles, fundamentally.
Your best options are either to make a regular n-gon where n is large enough to satisfy you, or to make the circle part of a texture and just render a square where the texels outside the circle are transparent.
Which is most appropriate depends entirely on context.
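A minimal sketch of the n-gon option, assuming the circle lies in a 2D plane; the function name and segment count are made up for illustration:

    // Hypothetical helper: build a filled circle as a triangle fan around the
    // light position. Returns interleaved x,y pairs ready for a VBO (or for
    // glVertexPointer in legacy GL); "segments" controls how round it looks.
    #include <cmath>
    #include <vector>

    std::vector<float> makeCircleFan(float cx, float cy, float radius, int segments) {
        std::vector<float> verts;
        verts.push_back(cx);   // fan centre
        verts.push_back(cy);
        for (int i = 0; i <= segments; ++i) {   // <= so the last edge closes the fan
            float a = 2.0f * 3.14159265f * static_cast<float>(i) / segments;
            verts.push_back(cx + radius * std::cos(a));
            verts.push_back(cy + radius * std::sin(a));
        }
        return verts;   // draw with GL_TRIANGLE_FAN, segments + 2 vertices
    }

Around 32 segments is usually enough to look smooth at typical on-screen sizes.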
Use Blender to create a simple circle mesh. Export it to one of the supported object formats, load it in your app and render it. You can use Assimp to load the mesh or write your own loader. You can find plenty of examples online showing how to do this.
I am using PyOpenGL with PyGame (although I am also trying to port the game to C++), and I would like to draw some low-poly trees in my game, something like the one in the picture below.
At the moment I only know how to draw simple flat surfaces and put textures on them (by creating an array of x, y, z coordinates and texture coordinates and using glDrawArrays). Is there a way to make something like the tree below using only OpenGL (would it involve 3D texture coordinates?), or do I need an external graphics engine?
If I do need one, does anyone have any recommendations? And am I right that I would then need to pass the vertices into an array in Python and use that in glDrawElements?
Past a certain point, you can't handle complex objects by just defining 3D vertices by hand in OpenGL. Instead you need a model file that you can include in your project. Most object models come with their texture files and texture coordinates included, so you don't need to worry about texturing them yourself.
For loading objects into your scene, I suggest the Assimp library. Once your environment is set up, the only thing left to do is search for free low-poly tree models. Here is a site where you can find free low-poly trees: http://www.loopix-project.com/
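For the C++ side, a minimal sketch of loading a model with Assimp; the file name and the particular post-process flags are just examples:

    // Load a model with Assimp and print what it contains. Each aiMesh exposes
    // positions, normals and UVs that can be copied into vertex buffers for
    // glDrawArrays/glDrawElements.
    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>
    #include <cstdio>

    int main() {
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile(
            "tree_low_poly.obj",                                    // example path
            aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_FlipUVs);
        if (!scene || !scene->mRootNode) {
            std::fprintf(stderr, "Assimp: %s\n", importer.GetErrorString());
            return 1;
        }
        for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
            const aiMesh* mesh = scene->mMeshes[m];
            std::printf("mesh %u: %u vertices, %u faces\n",
                        m, mesh->mNumVertices, mesh->mNumFaces);
        }
        return 0;
    }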
I'm learning OpenGL on Linux. Recently I've been trying to use text created by glutBitmapCharacter() as a texture on some of the quadric objects provided by GLU/GLUT. However, glutBitmapCharacter() does not return a pointer, so I can't feed its output to glTexImage2D(). I've googled this for quite a while, but all I found were topics related to the Android SDK, which I have no experience with.
All I can think of is to render the text, read it from the framebuffer using glReadPixels(), and save it to a file. Then read the pixels back from the file into a buffer, and finally draw the 3D objects with that text as the texture (i.e. feed the pointer to glTexImage2D()).
However, that seems kind of silly. What I want to ask is: is there some alternative way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL. GLUT does not provide any tools for that. One option would be to implement your own text rendering: load glyphs with FreeType, build a texture from the glyphs, and apply that texture to the polygons. Freetype-GL is a tiny helper library that makes this a lot easier if you go that route.
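A minimal sketch of that FreeType route, rasterising a single glyph and uploading it as a texture; error handling is omitted and the font path and pixel size are just examples:

    // Rasterise one character with FreeType and upload the 8-bit glyph bitmap
    // as an OpenGL texture (GL_LUMINANCE fits the legacy GLU/GLUT pipeline).
    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    GLuint makeGlyphTexture(const char* fontPath, char ch) {
        FT_Library ft;
        FT_Face face;
        FT_Init_FreeType(&ft);
        FT_New_Face(ft, fontPath, 0, &face);
        FT_Set_Pixel_Sizes(face, 0, 48);          // 48 px tall glyphs
        FT_Load_Char(face, ch, FT_LOAD_RENDER);   // rasterise into face->glyph->bitmap

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // glyph rows are tightly packed
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                     face->glyph->bitmap.width, face->glyph->bitmap.rows,
                     0, GL_LUMINANCE, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        FT_Done_Face(face);
        FT_Done_FreeType(ft);
        return tex;
    }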
Another option would be to again load the text glyphs into a texture and then apply them as decals over the geometry. That way you can still draw the text in 2D onto a flat surface (the decal) and then apply it on top of the 3D object.
Every time I try to export an .obj from a 3D modelling program, it comes out without indices for the texture coordinates: the faces look like v//vn v//vn v//vn instead of v/vt/vn v/vt/vn v/vt/vn, which is what I want. I've tried Blender and Maya. Maya doesn't seem to have an option for this, but Blender lets you choose whether to write normals and texture coordinates. How can I get the texture-coordinate indices in there?
Blender's site mentions that it can export UVs, did you check that option? See Blender's Wavefront OBJ export options.
I have a model in Blender. I'd like to:
1. Combine a few different textures into one and save it as a bitmap
2. Make a UV mapping for this combined texture
I need to solve this for textured models in OpenGL. My data structure only lets me bind one texture to one model, so I'd like to have one texture per model. I'm aware that I could use a texture array (e.g. GL_TEXTURE_2D_ARRAY), but I don't want to complicate my project. I know how to do simple UV mapping in Blender.
My questions:
Can I do phases 1 and 2 exclusively in Blender?
Is the Blender bake technique what I'm searching for?
Are there any tutorials that show how to do this (for this specific problem)?
Maybe somebody can advise me on another Blender technique (or an OpenGL solution)?
Combine a few different textures into one and save it as a bitmap
Make a UV mapping for this combined texture
You mean generating a texture atlas?
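For context, a minimal sketch of what an atlas implies on the application side: if the textures are packed into a simple grid, each model's UVs have to be remapped into its tile of the shared image. The names here are made up for illustration:

    // Remap a 0..1 UV coordinate into one tile of a grid-packed texture atlas.
    struct UV { float u, v; };

    UV atlasUV(UV uv, int tile, int tilesPerSide) {
        float scale = 1.0f / tilesPerSide;
        int tx = tile % tilesPerSide;        // column of this tile in the atlas
        int ty = tile / tilesPerSide;        // row of this tile in the atlas
        return { (uv.u + tx) * scale,        // squeeze the original 0..1 range
                 (uv.v + ty) * scale };      // into the tile's sub-rectangle
    }

Note that once UVs are squeezed into a sub-rectangle like this, GL_REPEAT wrapping no longer works per texture, which is the limitation mentioned further below.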
Can I do phases 1 and 2 exclusively in Blender?
No. But it would surely be a well-received add-on.
Is the Blender bake technique what I'm searching for?
No. Blender's bake feature generates texture contents using the rendering process. For example, you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame of a flythrough, the texture is used as the source for the illumination terms (it acts like a cache). Another application is generating textures for a game engine from Blender's procedural materials.
Maybe somebody can advise me on another Blender technique (or an OpenGL solution)?
I think a texture array would really be the best solution, as it also avoids the problems an atlas has with wrapped/repeated textures.
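A minimal sketch of setting one up, assuming OpenGL 3.0+ (or the EXT_texture_array extension) and a loader such as GLEW or glad for the entry points; the sizes are examples:

    // Allocate a GL_TEXTURE_2D_ARRAY with room for several same-sized layers.
    #include <GL/glew.h>   // any loader exposing the GL 3.x entry points will do

    GLuint makeTextureArray(int width, int height, int layers) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        // Allocate storage for all layers at once...
        glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                     width, height, layers, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        // ...then upload each image into its own layer, e.g.:
        // glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layerIndex,
        //                 width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Unlike an atlas, each layer can still wrap/repeat on its own.
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
        return tex;
    }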
Another possibility is to use projection painting. An object in Blender can have multiple UV maps; if importing it doesn't create each UV map, you may need to align each one by hand. Then you create a new UV map that lays the entire model out onto one image.
In texture-paint mode you can then use projection painting, with the material from one UV map acting as the brush for painting onto the new image.