I'm using a program called Sculptris to create models in Wavefront OBJ format. I just created my first couple of models and am now trying to import them into an OpenGL scene. I had never written an object loader before tonight, but I'm pretty sure I got the parsing of the OBJ file right. Unfortunately, when I add lighting, the normals on half of the model appear to be incorrect. Without lighting, the model is textured and colored correctly and looks perfect. With lighting, the image looks like this...
If half the model is correct, I'm fairly sure there is nothing wrong with my OBJ parsing. Therefore Sculptris must only have the normals correct for half the model (probably a result of the symmetry used while sculpting). If anyone's familiar with the program, do you know what I'm doing wrong? For those familiar with OBJ in general: is there something I don't know about duplicate normals (which this model has, because it is left-right symmetrical)?
This model is symmetrical, so there's a fairly good chance that you created it by modelling one half and then mirroring it. In many modelling applications, mirroring inverts the normals and changes the winding order, so you will have to select the mirrored faces and flip their normals.
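If fixing it in the modelling tool proves awkward, you can also patch things up in your loader. Here's a rough sketch of one heuristic, assuming one normal per vertex position and a roughly convex, blob-like model (all names are illustrative, and this is not Sculptris-specific):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Heuristic repair for blob-like sculpts: a vertex normal should point
// away from the model's centroid, so flip any normal pointing inward.
// This breaks down on strongly concave geometry, but is often good
// enough for roughly convex sculpted models.
void flipInwardNormals(const std::vector<Vec3>& positions,
                       std::vector<Vec3>& normals)
{
    if (positions.empty() || positions.size() != normals.size()) return;

    Vec3 centroid = {0.0f, 0.0f, 0.0f};
    for (const Vec3& p : positions) {
        centroid.x += p.x; centroid.y += p.y; centroid.z += p.z;
    }
    const float n = static_cast<float>(positions.size());
    centroid.x /= n; centroid.y /= n; centroid.z /= n;

    for (std::size_t i = 0; i < normals.size(); ++i) {
        Vec3 outward = {positions[i].x - centroid.x,
                        positions[i].y - centroid.y,
                        positions[i].z - centroid.z};
        if (dot(normals[i], outward) < 0.0f) {  // pointing inward: flip it
            normals[i].x = -normals[i].x;
            normals[i].y = -normals[i].y;
            normals[i].z = -normals[i].z;
        }
    }
}
```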
I'm trying to load a 3D model using Assimp. Something strange is going on with the depth. When I use a left-handed projection/view matrix, the model looks like this (the floor and parts of the roofs of the houses disappear).
When I use a right-handed projection/view matrix, the model looks like this (the wall disappears).
I checked that the depth buffer is enabled, but for some reason this strange behaviour happens anyway. Does anyone have any idea what the problem might be? By the way, I tested this model using Assimp and OpenGL, and in OpenGL the model looks good.
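For reference, this is roughly the setup I'm checking when I say the depth buffer is enabled (a sketch with assumed variable names, not the repo's exact code):

```cpp
#include <d3d11.h>

// Illustrative D3D11 depth setup: the depth-stencil state must be created
// AND bound, and the depth-stencil view must be bound together with the
// render target and cleared every frame, or depth testing silently does
// nothing.
void setupDepth(ID3D11Device* device, ID3D11DeviceContext* context,
                ID3D11RenderTargetView* rtv, ID3D11DepthStencilView* dsv)
{
    D3D11_DEPTH_STENCIL_DESC desc = {};
    desc.DepthEnable    = TRUE;
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    desc.DepthFunc      = D3D11_COMPARISON_LESS;  // closer fragments win

    ID3D11DepthStencilState* state = nullptr;
    device->CreateDepthStencilState(&desc, &state);
    context->OMSetDepthStencilState(state, 0);

    context->OMSetRenderTargets(1, &rtv, dsv);
    context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);
}
```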
I tested the model using the code from their GitHub without changing anything:
SimpleTexturedDirectx11
Using VTK version 5.1, I'm having issues with some models not displaying correctly in OpenGL.
The process for getting the models into VTK is a bit roundabout, but it gets there and is fairly simple. Each model is a manifold mesh composed of only quads and tris.
Blender models->custom export format containing points, point normals, and polygons
Custom export format->Custom C++ parser->vtkPolyData
vtkPolyData->vtkTriangleFilter->vtkStripper->vtkPolyDataNormals->final product
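For reference, a sketch of how that last stage is wired up, assuming VTK 5.x's SetInput-style API (variable names are illustrative):

```cpp
#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkTriangleFilter.h>
#include <vtkStripper.h>
#include <vtkPolyDataNormals.h>

// polyData is the vtkPolyData produced by the custom C++ parser.
vtkSmartPointer<vtkTriangleFilter> tris =
    vtkSmartPointer<vtkTriangleFilter>::New();
tris->SetInput(polyData);  // VTK 5.x API; VTK 6+ uses SetInputData

vtkSmartPointer<vtkStripper> stripper =
    vtkSmartPointer<vtkStripper>::New();
stripper->SetInputConnection(tris->GetOutputPort());

vtkSmartPointer<vtkPolyDataNormals> normals =
    vtkSmartPointer<vtkPolyDataNormals>::New();
normals->SetInputConnection(stripper->GetOutputPort());
normals->SplittingOff();   // keep shared points instead of duplicating them at sharp edges
normals->ConsistencyOn();  // enforce consistent polygon ordering
normals->Update();

vtkPolyData* finalProduct = normals->GetOutput();
```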
As our final product was showing irregular and missing normals when it was rendered, I had VTK write the object to a plaintext file, which I then parsed back into Blender using Python.
Initial results were that the mesh was correct and matched the original model. However, when I used Blender's "select non-manifold" option, about 15% of the model was flagged as non-manifold. A bit of reading online suggested "remove doubles" as a solution, which did make the mesh closed, but the normals were still irregular.
So, I guess I'm hoping there are some additional options/functions/filters I can use to ensure the models are properly read and/or processed through the filters.
This was solved by having Blender triangulate the mesh prior to the export operation.
The mangling was due to Blender implicitly triangulating quads whose four points were not coplanar. By forcing explicit triangulation in advance, I was able to perform the export successfully and maintain the model's integrity and manifoldness. The holes were caused by the implicit triangulation not being reproduced by the exporter, which resulted in lost data.
I want to model the (biological) process of cell division. I have been able to create a 3D cell model and load it (using the glm library). However, I do not know how to make it divide, and I don't know where to start.
Does anyone know how to achieve the effect of things replicating in OpenGL? (It would be great if I could use GLUT and glm for that.) Maybe you could just show me how to make a sphere replicate.
I think what you're looking for is called meta-particles or metaballs. By adjusting the threshold function you may be able to get a cell-divide effect, but this isn't guaranteed; metaballs normally look more like quicksilver and are used to create water out of particles.
They're hard to implement in 3D for a novice: you'll need to be able to turn a mathematically defined surface into a triangular mesh (the marching cubes algorithm), and the result isn't guaranteed to look fully realistic.
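To give a flavour of what's involved, here's a minimal sketch of an illustrative two-centre field function. The divide effect comes from moving the centres apart; the hard part (polygonising the isosurface with marching cubes) is not shown:

```cpp
struct Vec3 { float x, y, z; };

// Illustrative metaball field: the sum of inverse-square contributions
// from two centres. The surface is the isosurface field(p) == threshold;
// as the centres move apart, the blob pinches and splits in two, which
// is the "divide" look.
float field(const Vec3& p, const Vec3& a, const Vec3& b) {
    auto contrib = [](const Vec3& p, const Vec3& c) {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        return 1.0f / (r2 + 1e-6f);  // avoid division by zero at the centre
    };
    return contrib(p, a) + contrib(p, b);
}

// A point p is "inside" the blob when field(p, a, b) >= threshold;
// marching cubes extracts the triangle mesh of that boundary.
```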
I suggest trying something else, or using a cheaper trick: draw two semi-transparent spheres on top of each other, then move them apart, or something like that (see the sketch below).
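A rough GLUT sketch of that cheap approach (illustrative names; assumes an existing GL context, with lighting off or GL_COLOR_MATERIAL enabled):

```cpp
#include <GL/glut.h>

// Draw two semi-transparent spheres that start coincident and move apart.
// t runs from 0 (one cell) to 1 (two separated cells).
void drawDividingCell(float t) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(0.8f, 0.4f, 0.4f, 0.6f);

    float offset = t * 1.2f;         // how far each half has travelled
    float radius = 1.0f - 0.2f * t;  // shrink slightly as they separate

    glPushMatrix();
    glTranslatef(-offset, 0.0f, 0.0f);
    glutSolidSphere(radius, 32, 32);
    glPopMatrix();

    glPushMatrix();
    glTranslatef(offset, 0.0f, 0.0f);
    glutSolidSphere(radius, 32, 32);
    glPopMatrix();

    glDisable(GL_BLEND);
}
```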
Of course, a certain way to get the desired result is to use a modelling package (like Blender) and a skilled artist, but displaying the modelled result in your application will be difficult because the object's topology will change every frame, and producing a satisfactory result will take time and skill.
I'm working on making a new visualization of the type of binary stars I study, and I'm starting from an existing code that renders a nice view of them given some sensible physical parameters.
I would like a bit more freedom on the animation side of things, however, and my first thought was to output the models made by the program in a format that could be read by something else (Blender?). I've read up on the Wavefront .OBJ format, and while it seems straightforward, I can't seem to get it right; importing fails silently, and I suspect it's because I'm not understanding how the objects are actually stored.
The program I'm starting from is a C++ project called BinSim, and it already has a flag to output vertices to a log file for all the objects created. The format seems pretty simple: just a list of indices, x, y, z, and R, G, B (sometimes A) values. An example of the output I've been working with can be found here; each object is divided up into a latitude/longitude grid of points, and this is a small snippet (the full file is upwards of 180 MB for all the objects created).
I've been able to see that the objects are defined as triangle strips, but I'm confused enough by all of this that I can't see a clear path from this list of vertices to an .OBJ (or whatever) format. Sorry if this really belongs in another area (GameDev?), and thanks!
OpenGL is not a scene management system; it's a drawing API, and basing your model storage on OpenGL's drawing structures is tedious. As already said, OpenGL draws things, and it has several drawing primitives, the triangle strip being one of them: you start with two vertices (forming a line), and each incoming vertex extends the last two specified vertices into a triangle. The Wavefront OBJ format doesn't know about triangle strips, so you'd have to break them down into individual triangles, emulating the way OpenGL does it (see the sketch below).
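A rough sketch of that conversion, assuming one strip per OBJ file (OBJ face indices are 1-based, hence the +1 offsets; all names are illustrative):

```cpp
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

// Write one triangle strip as individual OBJ triangles, emulating OpenGL's
// strip expansion: vertices i-2, i-1, i form a triangle, with the order
// swapped on every other triangle so all faces keep the same winding.
void writeStripAsObj(std::FILE* out, const std::vector<Vertex>& strip) {
    for (const Vertex& v : strip)
        std::fprintf(out, "v %f %f %f\n", v.x, v.y, v.z);

    for (std::size_t i = 2; i < strip.size(); ++i) {
        if (i % 2 == 0)  // even triangles: natural order
            std::fprintf(out, "f %zu %zu %zu\n", i - 1, i, i + 1);
        else             // odd triangles: swap the first two to preserve winding
            std::fprintf(out, "f %zu %zu %zu\n", i, i - 1, i + 1);
    }
}
```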
Also, don't forget that Blender is easily extensible using Python scripting, so you could just write an import script for whatever data you already have, without going through the hassle of using some ill-suited format.
So I've got this class where I have to make a simple game in OpenGL.
I want to make Space Invaders (basically).
So how in the world should I make anything appear on my screen that looks decent at all? :(
I finally found some code that let me import a 3DS object. I thought it was sweet, and I put it in a class to make it a little more modular and reusable (http://www.spacesimulator.net/tut4_3dsloader.html).
However, either the program I use (Cheetah3D) is exporting the UV map incorrectly, and/or the loader's .bmp-reading code breaks on any bitmap that isn't the one that came with the demo. The image comes out all weird; it's very hard to explain.
So I arrive at my question: what approach should I use to draw objects? Should I honestly expect to spend hours guessing at vertices to make a Space Invaders ship, and then also try to map a decent texture onto that object? The code I'm using draws the untextured object just fine, but I can't begin to map the texture onto it because I don't know which vertices correspond to which polygons.
Thanks SO for any suggestions on what I should do. :D
You could draw textured quads, provided you have a texture loader.
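For example, a minimal immediate-mode sketch, where textureId stands in for whatever your texture loader returns:

```cpp
#include <GL/gl.h>

// Draw a single textured quad in immediate mode; textureId is assumed to
// come from an existing texture loader (an illustrative name, not a
// specific library's API).
void drawSprite(GLuint textureId, float x, float y, float w, float h) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

For a 2D game like Space Invaders, one textured quad per sprite is usually all you need; no 3D modelling required.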
I really wouldn't worry too much about your "uv map": if you can get your vertices right, then you can generally kludge something together anyway. That's what I'd do.