Way to speed up .obj file loader for HalfEdge Mesh? - c++

I am working on a 3D OpenGL (C++) application in which I have my own mesh structure based on the half-edge data structure. I want to build a simple way to load Wavefront OBJ files into my mesh structure. Of course, I can do so naively line by line, but there has to be a more efficient way (I know professional applications aren't loading the file naively line by line; it would be too slow for millions of vertices).
Can anyone point me to a tutorial or an example of a really fast OBJ loader? It would be preferable if it had something to do with a half-edge data structure.
Edit:
There are two basic issues I am looking to get around:
1) Avoiding the general slowness of reading floating point numbers from a file.
2) Intelligently determining the "adjacent" half-edge for each edge on the fly. I am imagining some sort of hashing function to determine whether the symmetric or next edge for the edge being created already exists and, if so, to use that pointer.

I had a similar issue loading OBJ files a while ago, although I was searching for shared vertices as opposed to edges. Since the file format itself contains no connectivity information, the best way is to use a std::set. Each time you want to add an edge to your data structure, you can search the set to see if it already exists. Set searching is logarithmic in complexity, so it scales well with the size of your data structure. The only way to avoid this that I can think of is to choose a file format that contains the connectivity information you need or, as Michael Slade suggested, create your own format and conversion tool.
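For the half-edge case specifically, the same idea works with a std::map keyed on the directed vertex pair: before linking twins, look up the reversed pair. A minimal sketch, with HalfEdge and its members standing in for your own structure:
#include <map>
#include <utility>

struct HalfEdge
{
    HalfEdge* twin = nullptr;  // symmetric half-edge, once both sides exist
    // next, vertex, face, ... omitted
};

// Keyed on the directed edge (from, to); the twin is the reversed pair.
std::map<std::pair<int, int>, HalfEdge*> edgeMap;

HalfEdge* addHalfEdge(int v0, int v1)
{
    HalfEdge* he = new HalfEdge();
    auto it = edgeMap.find({v1, v0});  // was the opposite half-edge created already?
    if (it != edgeMap.end())
    {
        he->twin = it->second;         // link the two as twins
        it->second->twin = he;
    }
    edgeMap[{v0, v1}] = he;
    return he;
}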

Reading and decoding ASCII files is slow, particularly if the files have a million floating point numbers to convert.
My idea: write a program, in any language you desire, to translate the .obj files into a binary format your program can read more or less directly into memory. Then run that program on the .obj files you want to load and have your program load the translated files.
For extra points, you could have your OpenGL program do this translation on the fly and cache the results, checking file modification times and updating the cache as necessary.
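A minimal sketch of such a binary cache, assuming the OBJ has already been parsed into a flat float array (the function names are illustrative):
#include <cstdint>
#include <fstream>
#include <vector>

void saveCache(const char* path, const std::vector<float>& verts)
{
    std::ofstream out(path, std::ios::binary);
    std::uint64_t n = verts.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);  // count header
    out.write(reinterpret_cast<const char*>(verts.data()), n * sizeof(float));
}

std::vector<float> loadCache(const char* path)
{
    std::ifstream in(path, std::ios::binary);
    std::uint64_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    std::vector<float> verts(n);
    // One bulk read, no ASCII parsing.
    in.read(reinterpret_cast<char*>(verts.data()), n * sizeof(float));
    return verts;
}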

Related

Compare 3D objects (Wavefront OBJ) programmatically to verify they are the same geometry

I am using C++ to compare 3D models of molecules, but I am having a hard time figuring out exactly what would be the best way to do so.
I started by reading the OBJ as a text file, line by line into an array, and then doing the same with the second object to see if they are the same. I took inspiration from image comparison, where I read about common algorithms that involve transforming the image into a byte stream and using OpenCV to create histograms and a 2D matrix to compare the two images.
Is there another way that is more appropriate and allows me to compare models even if they have, for example, different materials associated with them? All I am concerned with is the geometry, to be sure that it is the same molecule.
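For what it's worth, a minimal sketch of the line-based approach restricted to geometry, ignoring materials; this assumes both files list vertices and faces in the same order (otherwise a canonical ordering or geometric hashing would be needed):
#include <fstream>
#include <string>
#include <vector>

// Keep only vertex ("v ") and face ("f ") lines; material lines are ignored.
std::vector<std::string> geometryLines(const char* path)
{
    std::vector<std::string> lines;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line))
        if (line.rfind("v ", 0) == 0 || line.rfind("f ", 0) == 0)
            lines.push_back(line);
    return lines;
}

bool sameGeometry(const char* a, const char* b)
{
    return geometryLines(a) == geometryLines(b);
}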

Joining two meshes into one

Suppose I have two meshes stored in any sane format (e.g. Wavefront .obj or COLLADA .dae), and I want to combine them into one mesh programmatically. More precisely, I have a landscape and an object as two meshes. I want to put the object into the landscape after applying a transformation to it so it ends up in the right place, and export the result as a model.
As far as I understood, Assimp has something similar named SceneCombiner, yet it seems that this is an internal structure with no public interface (even though the ticket concerning it at https://github.com/assimp/assimp/issues/584 is closed, I couldn't find out how to use it).
Maybe I should use CGAL or something like that? I don't have much experience with CG libraries, so any advice would be really useful!
You can do that with CGAL. You would read the two meshes, call copy_face_graph(), and then write the combined mesh back.
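A minimal sketch of that pipeline with CGAL's Surface_mesh (the file names are placeholders; you would apply the placement transformation to the object mesh, e.g. with CGAL::Polygon_mesh_processing::transform(), before the copy):
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/copy_face_graph.h>
#include <fstream>

typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;

int main()
{
    Mesh landscape, object;
    std::ifstream in1("landscape.off"), in2("object.off");
    in1 >> landscape;  // Surface_mesh reads/writes OFF via stream operators
    in2 >> object;
    // ... apply the placement transformation to `object` here ...
    CGAL::copy_face_graph(object, landscape);  // appends object's vertices/faces
    std::ofstream out("combined.off");
    out << landscape;
    return 0;
}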

How to import FBX key frame animation into my own game engine?

I am building a small game engine and using the FBX SDK to import FBX meshes and animations, which means I want to store the animation in my own class. Now, there are two ways to achieve this:
The first way is to store key frames only. If I store key frames only, I will have the raw animation data within the FBX file, and I can manipulate or adjust the animation whenever I want.
The second way is to sample frames at a fixed rate. Instead of storing key frames, I obtain the frames I need from the FBX file and store them. This way, I may lose the raw animation data, because when I sample frames, the chosen frames may not be the key frames, resulting in a minor loss of detail.
From my perspective, the first way is the best way, but most of the tutorials on the Internet use the second way. For example, here. Below is a snippet of it:
for (FbxLongLong i = start.GetFrameCount(FbxTime::eFrames24); i <= end.GetFrameCount(FbxTime::eFrames24); ++i)
{
    // Sample the scene at frame i of a fixed 24 fps timeline.
    FbxTime currTime;
    currTime.SetFrame(i, FbxTime::eFrames24);
    *currAnim = new Keyframe();
    (*currAnim)->mFrameNum = i;
    // Node's global transform at this time, combined with the geometry offset.
    FbxAMatrix currentTransformOffset = inNode->EvaluateGlobalTransform(currTime) * geometryTransform;
    // Bone transform relative to the mesh at this time.
    (*currAnim)->mGlobalTransform = currentTransformOffset.Inverse() * currCluster->GetLink()->EvaluateGlobalTransform(currTime);
    // Advance the linked list of key frames.
    currAnim = &((*currAnim)->mNext);
}
Notice that EvaluateGlobalTransform(..) is an FBX SDK function and seems to be the only safe interface between us and an FBX animation on which we can rely. It also seems that the second way (using EvaluateGlobalTransform to sample at a fixed rate) is the standard and commonly accepted way to do the job. And there is an explanation that says:
"The FBX SDK gives you a number of ways to get the data you might
want. Unfortunately due to different DCC tools (Max/Maya/etc) you may
not be able to get exactly the data you want. For instance, let's say
you find the root bone and it has translation on it in the animation.
You can access the transform in a number of ways. You can use the
LclTransform property and ask for the FbxAMatrix at various times. Or
you can call the evaluation functions with a time to get the matrix.
Or you can use the evaluator's EvaluateNode function to evaluate the
node at a time. And finally, the most complicated version is you can
get the curve nodes from the properties and look at the curve's keys.
Given all those options, you might think getting the curves would be
the way to go. Unfortunately Maya, for instance, bakes the animation
data to a set of keys which have nothing to do with the keys actually
setup in Maya. The reason for this is that the curves Maya uses are
not the same as those FBX supports. So, even if you get the curves
directly, they may have hundreds of keys in them since they might have
been baked.
What this means is that basically unless Max has curves supported by
FBX, it may be baking them and you won't have a way to find what the
original two poses in your terms were. Generally you will iterate
through time and sample the scene at a fixed rate. Yup, it kinda sucks
and generally you'll want to simplify the data after sampling."
To sum up:
The first way:
Pros: easy to manipulate and adjust, accurate detail, less memory consumption (if you generate your vertex transformation matrices on the fly).
Cons: difficult to get the key frames, not applicable to some FBX files.
The second way:
Pros: easy to get the chosen frames, applicable to all FBX files.
Cons: difficult to change the animation, large memory consumption, inaccurate details.
So, my questions are:
Is the second way really the common way to do this?
Which way do famous game engines like Unreal and Unity use?
If I want to use the first way, even though it may not work under some circumstances, how can I get only the key frames from an FBX file (i.e. not using EvaluateGlobalTransform but working with FbxAnimStack, FbxAnimLayer, FbxAnimCurveNode, FbxAnimCurve, and FbxAnimCurveKey)?
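On the last question, the raw keys live on FbxAnimCurve objects attached to node properties per animation layer. A hedged sketch, assuming scene is the imported FbxScene* and node is the bone of interest (verify the exact calls against your SDK version):
#include <fbxsdk.h>

// Collect the raw translation-X keys of `node` from every animation layer.
void collectRawKeys(FbxScene* scene, FbxNode* node)
{
    FbxAnimStack* stack = scene->GetSrcObject<FbxAnimStack>(0);  // first take
    int layerCount = stack->GetMemberCount<FbxAnimLayer>();
    for (int l = 0; l < layerCount; ++l)
    {
        FbxAnimLayer* layer = stack->GetMember<FbxAnimLayer>(l);
        FbxAnimCurve* curve =
            node->LclTranslation.GetCurve(layer, FBXSDK_CURVENODE_COMPONENT_X);
        if (!curve)
            continue;  // this component isn't animated on this layer
        for (int k = 0; k < curve->KeyGetCount(); ++k)
        {
            FbxTime time  = curve->KeyGetTime(k);   // when the key fires
            float   value = curve->KeyGetValue(k);  // the key's value
            // store (time, value) in your own Keyframe class ...
        }
    }
}
As the quoted explanation warns, exports from Maya or Max are often baked, so these curves may hold one key per frame anyway.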

List of vertices from OpenGL program to something importable

I'm working on a new visualization of the type of binary stars I study, and I'm starting from existing code that renders a nice view of them given some sensible physical parameters.
I would like a bit more freedom on the animation side of things, however, and my first thought was to output the models made by the program in a format that could be read in by something else (Blender?). I've read up on the Wavefront .OBJ format, and while it seems straightforward, I can't seem to get it right; importing fails silently, and I suspect it's because I'm not understanding how the objects are actually stored.
The program I'm starting from is a C++ project called BinSim, and it already has a flag to output vertices to a log file for all the objects created. It seems pretty simple: just a list of indices, x, y, z, and R, G, B (sometimes A) values. An example of the output format I've been working with can be found here; each object is divided up into a latitude/longitude grid of points, and this is a small snippet (the full file is upwards of 180 MB for all the objects created).
I've been able to see that the objects are defined as triangle strips, but I'm confused enough by all of this that I can't see a clear path towards turning this list of vertices into an .OBJ (or whatever) format. Sorry if this really belongs in another area (GameDev?), and thanks!
OpenGL is not a scene management system; it's a drawing API, and starting from OpenGL data structures for model storage is tedious. As already said, OpenGL draws things. There are several drawing primitives, the triangle strip being one of them. You start with two vertices (forming a line), and each incoming vertex extends the last two specified vertices into a triangle. The Wavefront OBJ format doesn't know triangle strips; you'd have to break them down into individual triangles, emulating the way OpenGL does it.
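A minimal sketch of that break-down, assuming strip holds 0-based vertex indices in strip order (OBJ face indices are 1-based):
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

void writeStripAsObjFaces(std::FILE* out, const std::vector<int>& strip)
{
    for (std::size_t i = 2; i < strip.size(); ++i)
    {
        // Each new vertex forms a triangle with the previous two.
        int a = strip[i - 2], b = strip[i - 1], c = strip[i];
        if (i % 2 != 0)
            std::swap(a, b);  // flip odd triangles to keep a consistent winding
        std::fprintf(out, "f %d %d %d\n", a + 1, b + 1, c + 1);
    }
}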
Also don't forget that Blender is easily extensible using Python scripting, and you can just write an import script for whatever data you already have without going through the hassle of using some ill-suited format.

Efficient TIFF tile extraction C++

I am working with 1 GB TIFF images of around 20000 x 20000 pixels. I need to extract several tiles (of about 300x300 pixels) from the images, at random positions.
I tried the following solutions:
Libtiff (the only low-level library I could find) offers TIFFReadScanline(), but that means reading in around 19700 unnecessary pixels.
I implemented my own TIFF reader which extracts a tile out of the image without reading in unnecessary pixels. I expected it to be faster, but doing a seekg for every line of the tile makes it very slow. I also tried reading all the lines of the file that include my tile into a buffer and then extracting the tile from the buffer, but the results are more or less the same.
I'd like to receive suggestions that would improve my tile-extraction tool!
Everything is welcome: maybe you can propose a more efficient library I could use, some tips about C/C++ I/O, a higher-level strategy for my needs, etc.
Regards,
Juan
[Major edit 14 Jan 10]
I was a bit confused by your mention of tiles, when the TIFF is not tiled.
I do use tiled/pyramidal TIFF images. I've created those with VIPS:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,pyramid
I think you can do this with:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,flat
You may want to experiment with the tile size. Then you can read tiles using TIFFReadEncodedTile.
Multi-resolution storage using pyramidal TIFFs is much faster if you need to zoom in/out. You may also want to use it to show a coarse image almost immediately, followed by the detailed picture.
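A hedged sketch of that read path with libtiff, assuming a file tiled as above (error handling omitted):
#include <tiffio.h>
#include <cstdint>
#include <vector>

std::vector<unsigned char> readTileContaining(const char* path,
                                              std::uint32_t x, std::uint32_t y)
{
    TIFF* tif = TIFFOpen(path, "r");
    std::vector<unsigned char> buf(TIFFTileSize(tif));  // one decompressed tile
    // Map the pixel coordinate to its tile index, then decode just that tile.
    ttile_t tile = TIFFComputeTile(tif, x, y, 0, 0);
    TIFFReadEncodedTile(tif, tile, buf.data(), buf.size());
    TIFFClose(tif);
    return buf;
}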
After switching to (appropriately sized) tiled storage (which will bring you MASSIVE performance improvements for random access!), your bottleneck will be disk I/O. A file reads much faster in sequence. Here, mmapping may be the solution.
Some useful links:
VIPS
IIPImage
LibTiff.NET stackoverflow
VIPS is an image handling library which can do much more than just read/write. It has its own, very efficient internal format, and good documentation on the algorithms. For one, it decouples processing from the filesystem, thereby allowing tiles to be cached.
IIPImage is a multi-zoom webserver/browser library. I found the documentation a very good source of information on multi-resolution imaging (like Google Maps).
The other solution on this page, using mmap, is efficient only for 'small' files. I've hit the 32-bit boundaries often. Generally, allocating a 1 GByte chunk of memory will fail on a 32-bit OS (even with 4 GBytes of RAM installed) because virtual memory gets fragmented after one or two application runs. Still, there is sufficient memory to cache parts or the whole of the image. More memory = more performance.
Just mmap your file.
http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
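A minimal POSIX sketch of that suggestion (error handling omitted; the tile offset arithmetic depends on your file layout):
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    int fd = open("image.tif", O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    // The whole file appears as one read-only byte array; the OS pages
    // in only what you actually touch.
    const unsigned char* data = static_cast<const unsigned char*>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    // ... copy tile rows straight out of `data` using computed offsets ...
    munmap(const_cast<unsigned char*>(data), st.st_size);
    close(fd);
    return 0;
}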
Thanks everyone for the replies.
Actually, a change in the way tiles were required allowed me to extract the tiles from the files on the hard disk sequentially instead of randomly. This let me load part of the file into RAM and extract the tiles from there.
The efficiency gain was huge. Otherwise, if you do need random access to a file, mmap is a good deal.
Regards,
Juan
I did something similar to this to handle arbitrarily large TARGA (TGA) format files.
The thing that made it simple for that kind of file is that the image is not compressed: you can calculate the position of any arbitrary pixel within the image and find it with a simple seek. You might consider the TARGA format if you have the option to specify the image encoding.
If not, there are many varieties of the TIFF format. You probably want to use a library, since the library authors have already gone through the pain of supporting all the different variants.
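A minimal sketch of that seek arithmetic for an uncompressed image; all names here are illustrative, and the header size would be 18 bytes for a basic TGA:
#include <cstdint>
#include <fstream>

// Read one pixel straight out of an uncompressed image file.
void readPixel(std::ifstream& file, std::uint32_t x, std::uint32_t y,
               std::uint32_t width, std::uint32_t headerSize,
               std::uint32_t bytesPerPixel, char* out)
{
    // Rows are contiguous, so a pixel's byte offset is a closed-form expression.
    std::streamoff offset =
        std::streamoff(headerSize) +
        (std::streamoff(y) * width + x) * bytesPerPixel;
    file.seekg(offset);
    file.read(out, bytesPerPixel);
}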
Did you get a specific error message? Depending on how you used that command line, you could have been stepping on your own file.
If that wasn't the issue, try using ImageMagick instead of VIPS, if that's an option.