How to convert a mesh object into an .fbx file in C++

I am trying to understand the best approach to converting a 3D object, defined by a series of vector coordinates, into a .fbx file within a C++ environment.
Let's use a simple example: say I have a simple wire-frame cube which exists as a series of 12 vectors (a cube has 12 edges), each consisting of a start and end 3D x, y, z coordinate, e.g.
// one edge of the cube, from (0, 0, 0) to (1, 0, 0)
int vec1[2][3] = {
{0, 0, 0},
{1, 0, 0}};
This is, in a sense, a mesh object, although it is not in any standard mesh file format.
My question is: how would I best go about writing code to convert this into the correct structure to be saved as an .fbx file?
Additionally, I have found much information online regarding:
fbx parsers
fbx writers
fbx sdk
However, I do not believe these are exactly what I am looking for (please correct me if I am wrong). In my case I would like, in a sense, to generate an .fbx file from scratch, with no prior file type to begin with or convert from.
Any information on this topic, whether a direct solution or just the correct terminology that I can use to direct my own more specific research, would be much appreciated.
Kind Regards,
Ichi.
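For reference, here is a minimal sketch of the kind of code involved, assuming the Autodesk FBX SDK mentioned above is installed and linked. The cube is expressed as 8 shared corner vertices and 6 quad faces (the 12-edge wire-frame list would first have to be collapsed into unique vertices and faces), and the file and object names are placeholders:
#include <fbxsdk.h>

int main()
{
    // SDK bookkeeping objects
    FbxManager*    manager = FbxManager::Create();
    FbxIOSettings* ios     = FbxIOSettings::Create(manager, IOSROOT);
    manager->SetIOSettings(ios);

    FbxScene* scene = FbxScene::Create(manager, "cubeScene");

    // 8 unique corner vertices of a unit cube
    const double corners[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
    };
    // 6 quad faces, each listing 4 corner indices
    const int faces[6][4] = {
        {0,1,2,3}, {4,5,6,7}, {0,1,5,4},
        {1,2,6,5}, {2,3,7,6}, {3,0,4,7}
    };

    FbxMesh* mesh = FbxMesh::Create(scene, "cubeMesh");
    mesh->InitControlPoints(8);
    for (int i = 0; i < 8; ++i)
        mesh->SetControlPointAt(
            FbxVector4(corners[i][0], corners[i][1], corners[i][2]), i);

    for (int f = 0; f < 6; ++f)
    {
        mesh->BeginPolygon();
        for (int v = 0; v < 4; ++v)
            mesh->AddPolygon(faces[f][v]);   // index into the control points
        mesh->EndPolygon();
    }

    // Attach the mesh to a node and add that node to the scene graph
    FbxNode* node = FbxNode::Create(scene, "cube");
    node->SetNodeAttribute(mesh);
    scene->GetRootNode()->AddChild(node);

    // Write the scene to disk (-1 lets the SDK pick the default .fbx format)
    FbxExporter* exporter = FbxExporter::Create(manager, "");
    exporter->Initialize("cube.fbx", -1, manager->GetIOSettings());
    exporter->Export(scene);
    exporter->Destroy();

    manager->Destroy();
    return 0;
}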

Related

Can't understand how vertical alignment to a baseline should be computed using the stb_truetype library for SDF fonts?

So, the question looks simple, but I still can't understand how to properly compute the vertical alignment of glyphs when using SDF bitmaps generated with the stb_truetype library.
In a nutshell, I have my own texture packer that generates a texture atlas with all of the needed SDF glyphs. There is also a data type that stores the following parameters per code point, including width, height, xoff and yoff, which I get from the stbtt_GetCodepointSDF function.
I've checked a few listings, including this one, but it didn't help me. So what's the right formula?
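One common pattern, sketched here under the assumption of a y-down screen coordinate system, with xoff/yoff taken straight from stbtt_GetCodepointSDF and the pixel height and cursor variables as placeholders:
#include "stb_truetype.h"

// Position one SDF glyph bitmap vertically against the baseline of a line of
// text. pixel_height must be the same size the SDF bitmaps were generated at.
void place_glyph(const stbtt_fontinfo* font, float pixel_height,
                 float cursor_x, float line_top_y,
                 int xoff, int yoff,
                 float* out_x, float* out_y)
{
    // Font-wide vertical metrics, in unscaled font units
    int ascent, descent, line_gap;
    stbtt_GetFontVMetrics(font, &ascent, &descent, &line_gap);

    // Use the same scale that was passed to stbtt_GetCodepointSDF
    float scale = stbtt_ScaleForPixelHeight(font, pixel_height);

    // The baseline sits 'ascent' (scaled) below the top of the line box
    float baseline_y = line_top_y + ascent * scale;

    // xoff/yoff are the offsets of the bitmap's top-left corner from the
    // glyph origin on the baseline (yoff is usually negative), so the
    // top-left corner of the glyph quad is:
    *out_x = cursor_x + xoff;
    *out_y = baseline_y + yoff;
}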

pcl::MarchingCubesRBF doesn't output mesh

I need to use Marching Cubes based on Radial Basis Function so I looked up this algorithm implemented in PCL.
Actually I'm using PCL v1.6 so the function is:
pcl::MarchingCubesRBF
The problem is that it doesn't work; that is, it doesn't create any triangles. Sometimes the output is '0 triangles created', and at other times running it locks up my machine.
Anyway my implementation is:
pcl::MarchingCubesRBF<pcl::PointNormal> mc;
pcl::PolygonMesh::Ptr triangles(new pcl::PolygonMesh);
mc.setInputCloud (cloud_with_normals);
mc.setSearchMethod (tree);
mc.reconstruct (*triangles);
I tried with different input files, but none of them works. One of them is https://github.com/FabiApfelkern/cloudfinish/blob/master/cat.pcd
I found there was a bug in the PCL implementation: http://dev.pointclouds.org/issues/768
However, I don't understand whether it is solved in PCL v1.6. Let me know how I could solve this, if it is possible.
I'm using C++ with VS2010.
I had the same problem and I fixed it by setting the grid resolution:
mc.setGridResolution (100, 100, 100);
mc.reconstruct (*triangles);
The grid resolution is the number of voxels used in the x, y and z directions. So if you set it to 1, 1, 1, there will be only one voxel, and thus not a very good representation of your point cloud. The higher the resolution, the more expensive the reconstruction will be, but it also improves the quality of the resulting mesh.
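Putting the question's snippet and this fix together, a minimal sketch of the full call sequence (assuming cloud_with_normals and tree are set up as in the question, and using the same 100-voxel resolution):
pcl::MarchingCubesRBF<pcl::PointNormal> mc;
pcl::PolygonMesh::Ptr triangles(new pcl::PolygonMesh);

mc.setInputCloud (cloud_with_normals);
mc.setSearchMethod (tree);
// set the resolution before calling reconstruct()
mc.setGridResolution (100, 100, 100);
mc.reconstruct (*triangles);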

Matlab griddata equivalent in C++

I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have C++ code that uses Eigen 3. I will have an Eigen vector that will contain x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in MATLAB. I would like to interpolate the z values from the vectors onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib functionality. Presumably there must be interpolation used somewhere in that for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from MATLAB, one way is to embed the GNU MATLAB clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a cpp file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and VoxelGrid. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x, y bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this). So you need to massage the x, y values when you're done:
Ex:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

int main()
{
    // read in a list of comma separated values (x,y,z)
    FILE * fp = fopen("points.xyz", "r");

    // store them in PCL's point cloud format
    pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr (new pcl::PointCloud<pcl::PointXYZ>);
    double x, y, z;
    while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) == 3)
    {
        pcl::PointXYZ basic_point;
        basic_point.x = x; basic_point.y = y; basic_point.z = z;
        basic_cloud_ptr->points.push_back(basic_point);
    }
    fclose(fp);
    basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size ();
    basic_cloud_ptr->height = 1;

    // create object for the result
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>());

    // create the filtering object and process
    pcl::VoxelGrid<pcl::PointXYZ> sor;
    sor.setInputCloud (basic_cloud_ptr);
    // set the bin sizes here (dx, dy, dz); for 2D results, make one of the bins
    // larger than the data set span in that axis
    sor.setLeafSize (0.1f, 0.1f, 1000.0f);
    sor.filter (*cloud_filtered);
    return 0;
}
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2D matrix and go through the point cloud, assigning points to their x, y bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than MATLAB's griddata for large datasets.
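A minimal sketch of that last binning step, assuming the same 0.1 leaf sizes as above and that the grid origin and extent (min_x, min_y, nx, ny) have already been computed from the data:
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Scatter the filtered (one-point-per-bin) cloud into a dense nx-by-ny grid of
// z values. Empty bins keep 0.0 and would need separate filling/extrapolation.
std::vector<std::vector<double> > bin_to_grid(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud_filtered,
    double min_x, double min_y,   // lower-left corner of the grid
    double dx, double dy,         // same leaf sizes as the VoxelGrid above
    int nx, int ny)               // number of grid cells per axis
{
    std::vector<std::vector<double> > grid(nx, std::vector<double>(ny, 0.0));
    for (size_t i = 0; i < cloud_filtered->points.size(); ++i)
    {
        const pcl::PointXYZ& p = cloud_filtered->points[i];
        int ix = static_cast<int>((p.x - min_x) / dx);
        int iy = static_cast<int>((p.y - min_y) / dy);
        if (ix >= 0 && ix < nx && iy >= 0 && iy < ny)
            grid[ix][iy] = p.z;   // one z value per x, y bin
    }
    return grid;
}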

Convert HEALPix to cube map

I'm looking for a way to convert a FITS file using the HEALPix projection into a cube map. I'm particularly interested in converting files from the Planck data release.
My question in pictures (images omitted): I have a full-sky HEALPix map, and I want the six faces of a cube map.
I found a workaround using a function from CMBView: http://code.google.com/p/cmbview/source/browse/src/Classes/CMBdata.m
Result:
https://github.com/hannorein/planck_cmb_cubemaps
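The core of such a conversion is per-pixel geometry: for every pixel of every cube face, build the 3D direction it represents, convert that direction to spherical angles, and sample the HEALPix map there. A sketch of that geometry, using the OpenGL cube-map face convention and a placeholder sample_healpix callback standing in for whichever HEALPix library is actually used (e.g. healpix_cxx):
#include <cmath>

// Map pixel (i, j) on one cube-map face of size N to spherical angles
// (theta, phi) and let a user-supplied callback sample the HEALPix map at
// that direction. face 0..5 = +X, -X, +Y, -Y, +Z, -Z (OpenGL convention).
double cube_pixel_value(int face, int i, int j, int N,
                        double (*sample_healpix)(double theta, double phi))
{
    // pixel centre in [-1, 1] on the face
    double a = 2.0 * (i + 0.5) / N - 1.0;
    double b = 2.0 * (j + 0.5) / N - 1.0;

    // direction vector for this face
    double x, y, z;
    switch (face)
    {
        case 0:  x =  1; y = -b; z = -a; break;  // +X
        case 1:  x = -1; y = -b; z =  a; break;  // -X
        case 2:  x =  a; y =  1; z =  b; break;  // +Y
        case 3:  x =  a; y = -1; z = -b; break;  // -Y
        case 4:  x =  a; y = -b; z =  1; break;  // +Z
        default: x = -a; y = -b; z = -1; break;  // -Z
    }

    double r     = std::sqrt(x * x + y * y + z * z);
    double theta = std::acos(z / r);      // colatitude, 0..pi
    double phi   = std::atan2(y, x);      // longitude, -pi..pi

    // e.g. healpix_cxx: map.interpolated_value(pointing(theta, phi))
    return sample_healpix(theta, phi);
}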

Parsing a Wavefront .obj file using C++

While trying to parse a Wavefront .obj file, I thought of two approaches:
Create a 2D array the size of the number of vertices. When a face uses a vertex, get its coordinates from the array.
Get the starting position of the vertex list, and then when a face uses a vertex, scan the lines until you reach that vertex.
IMO, option 1 will be very memory intensive, but much faster.
Since option 2 involves extensive file reading (and because the number of vertices in most objects becomes very large), it will be much slower, but less memory intensive.
The question is: comparing the tradeoff between memory and speed, which option would be better suited to an average computer?
And, is there an alternative method?
I plan to use OpenGL along with GLFW to render the object.
IMO, Option 1 will be very memory intensive, but much faster.
You must get those vertices into memory anyway. But there's no need for a 2D array, which, by the way, would cause two pointer indirections and thus a major performance hit. Just use a simple std::vector<Vertex> for your data; the vector index is the index for the accompanying face list.
EDIT due to comment
#include <vector>

// note: anonymous structs inside unions are a widely supported compiler
// extension, not strict ISO C++
class Vertex
{
public:
    union { struct { float x, y, z; };    float pos[3];      };
    union { struct { float nx, ny, nz; }; float normal[3];   };
    union { struct { float s, t; };       float texcoord[2]; };
};

std::vector<Vertex> vertices;
Generally you read the list of vertices into an array. Parsing ASCII text is extremely slow; do it only once when loading the file and then store everything in arrays in memory.
The same goes for the triangles/faces. Each triangle is generally composed of a list of three vertex indices, which should also be stored in an array.
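A minimal sketch of that index array, assuming triangulated faces and the Vertex / vertices arrangement from the edit above:
#include <vector>

// One triangle stored as three indices into the vertex array above
struct Triangle
{
    unsigned int v[3];
};

std::vector<Triangle> triangles;  // filled from the "f" lines of the .obj

// A face line "f 1 2 3" becomes { 0, 1, 2 } here, because OBJ indices are
// 1-based while C++ arrays are 0-based.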
You may find the OBJ reader in the VTK open source library to be useful: http://www.vtk.org/doc/nightly/html/classvtkOBJReader.html. We use it and have had no reason to write our own... Use VTK directly, or you may find studying the source code to be good for further inspiration of your own reader.
In my opinion, one of the major shortcomings of OBJ files is the use of ASCII. 3D ASCII files (be it STL, PLY, OBJ, etc.) are very slow to load because of the string parsing. Binary formats are much faster and should always be used if performance is an issue: the load time for a good binary format is near-instantaneous.
Just load them into arrays. Memory should not be an issue. Your system (usually) has way more memory than your GPU. If you are running into memory problems, you are probably loading a model that is too detailed. (I am semi-assuming that you are going to make a game in OpenGL. If you have a specific need for such large model files, you will still have to work out a way to load the appropriate chunks.)
You shouldn't need a 2-dimensional array. Your models should be triangulated, and then you can simply load the .obj file using GLUT's obj loader. Simply store points, faces and normals in three separate arrays/buffers. There is an example of how you can do it here, but if you want to do it fast you should go for a binary format.
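For comparison, a minimal sketch of a hand-rolled loader, assuming a triangulated .obj that contains only plain "v" and "f a b c" records (no normals, texture coordinates, or negative indices):
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { unsigned int a, b, c; };    // 0-based vertex indices

// Load positions and triangle indices from a minimal, triangulated .obj file.
bool load_obj(const std::string& path,
              std::vector<Vec3>& points, std::vector<Face>& faces)
{
    std::ifstream in(path.c_str());
    if (!in) return false;

    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v")                        // vertex position
        {
            Vec3 p;
            ls >> p.x >> p.y >> p.z;
            points.push_back(p);
        }
        else if (tag == "f")                   // triangular face, 1-based indices
        {
            unsigned int a, b, c;
            ls >> a >> b >> c;
            Face f = { a - 1, b - 1, c - 1 };  // convert to 0-based
            faces.push_back(f);
        }
        // other record types ("vn", "vt", comments, ...) are skipped here
    }
    return true;
}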
This is a pretty decent solution for prototyping: run a script that generates the arrays for use in OpenGL or your preferred rendering API. obj2opengl.pl is a Perl script, so you'll need Perl installed, which you can find here. The GitHub link is here.
While running the Perl script you may get a runtime error on line 154 concerning if(defined(@center)). Replace it with if(@center).
From the example, once the header file is generated with the data, you can use it as shown:
/*
created with obj2opengl.pl
source file    : ./banana.obj
vertices       : 4032
faces          : 8056
normals        : 4032
texture coords : 4420
*/

// include generated arrays
#include "./banana.h"

// set input data to arrays
glVertexPointer(3, GL_FLOAT, 0, bananaVerts);
glNormalPointer(GL_FLOAT, 0, bananaNormals);
glTexCoordPointer(2, GL_FLOAT, 0, bananaTexCoords);

// draw data
glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);