Parsing a Wavefront .obj file using C++

While trying to parse a Wavefront .obj file, I thought of two approaches:
1. Create a 2D array the size of the number of vertices. When a face uses a vertex, get its coordinates from the array.
2. Get the starting position of the vertex list, and then when a face uses a vertex, scan the lines until you reach that vertex.
IMO, option 1 will be very memory intensive, but much faster. Since option 2 involves extensive file reading (and because the number of vertices in most objects gets very large), it will be much slower, but less memory intensive.
The question is: comparing the trade-off between memory and speed, which option would be better suited to an average computer?
And is there an alternative method?
I plan to use OpenGL along with GLFW to render the object.

IMO, Option 1 will be very memory intensive, but much faster.
You must get those vertices into memory anyway. But there's no need for a 2D array, which BTW would cause two pointer indirections and thus a major performance hit. Just use a simple std::vector<Vertex> for your data; the vector index is the index into the accompanying face list.
EDIT due to comment
struct Vertex
{
    // anonymous structs inside unions are a widespread compiler extension,
    // not strict ISO C++, but they give both named and array-style access
    union { struct { float x, y, z; };    float pos[3];      };
    union { struct { float nx, ny, nz; }; float normal[3];   };
    union { struct { float s, t; };       float texcoord[2]; };
};

std::vector<Vertex> vertices;

Generally you read the list of vertices into an array. Parsing ASCII text is extremely slow; do it only once when loading the file and then store everything in arrays in memory.
Same goes for the triangles/faces: each triangle is generally composed of a list of three vertex indices. Those should also be stored in an array.
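To make that concrete, here is a minimal, hedged parsing sketch. It assumes a triangulated OBJ with plain "f v1 v2 v3" faces and deliberately ignores normals, texture coordinates, and "f v/vt/vn" index triples:

#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { unsigned v[3]; };

// Minimal sketch: reads only "v x y z" and triangulated "f a b c" lines.
bool LoadObj(const char* path, std::vector<Vec3>& verts, std::vector<Tri>& tris)
{
    FILE* fp = std::fopen(path, "r");
    if (!fp) return false;

    char line[256];
    while (std::fgets(line, sizeof line, fp))
    {
        Vec3 v; unsigned a, b, c;
        if (std::sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3)
            verts.push_back(v);
        else if (std::sscanf(line, "f %u %u %u", &a, &b, &c) == 3)
            tris.push_back({ { a - 1, b - 1, c - 1 } }); // OBJ indices are 1-based
    }
    std::fclose(fp);
    return true;
}

After this runs once, every face lookup is a constant-time array access instead of a file scan.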
You may find the OBJ reader in the open source VTK library useful: http://www.vtk.org/doc/nightly/html/classvtkOBJReader.html. We use it and have had no reason to write our own... Use VTK directly, or you may find studying its source code good inspiration for your own reader.
In my opinion, one of the major shortcomings of OBJ files is the use of ASCII. 3D ASCII files (be it STL, PLY, OBJ, etc.) are very slow to load because of the string parsing. Binary formats are much faster and should always be used if performance is an issue: the load time for a good binary format is near-instantaneous.
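If load time matters, a trivial binary cache can sit next to the OBJ: parse the ASCII once, dump the raw arrays, and load the dump on later runs. A hedged sketch, assuming the plain-old-data Vertex struct from the earlier answer:

#include <cstdint>
#include <cstdio>
#include <vector>

// Write the vertex array as a count followed by the raw bytes.
void SaveCache(const char* path, const std::vector<Vertex>& verts)
{
    FILE* fp = std::fopen(path, "wb");
    if (!fp) return;
    std::uint32_t n = static_cast<std::uint32_t>(verts.size());
    std::fwrite(&n, sizeof n, 1, fp);
    std::fwrite(verts.data(), sizeof(Vertex), n, fp);
    std::fclose(fp);
}

// Load it back with a single fread per array.
bool LoadCache(const char* path, std::vector<Vertex>& verts)
{
    FILE* fp = std::fopen(path, "rb");
    if (!fp) return false;
    std::uint32_t n = 0;
    if (std::fread(&n, sizeof n, 1, fp) != 1) { std::fclose(fp); return false; }
    verts.resize(n);
    bool ok = std::fread(verts.data(), sizeof(Vertex), n, fp) == n;
    std::fclose(fp);
    return ok;
}

The same pattern extends to the index array. Note that such a cache is endianness- and struct-layout-dependent, so treat it as a local cache, not an interchange format.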

Just load them into arrays. Memory should not be an issue. Your system (usually) has way more memory than your GPU. If you are running into memory problems, you are probably loading a model that is too detailed. (I am semi-assuming that you are going to make a game in OpenGL. If you have a specific need for such large model files, you will still have to work out a way to load the appropriate chunks.)

You shouldn't need a 2-dimensional array. Your models should be triangulated, and then you can simply load the obj file using GLUT's OBJ loader. Simply store points, faces and normals in 3 separate arrays/buffers. There is an example of how you can do it here, but if you want it to be fast you should go for a binary format.

This is a pretty decent solution for prototyping: run a script that generates the arrays for use in OpenGL or your preferred rendering API. obj2opengl.pl is a Perl script (you'll need Perl installed) that you can find here; the GitHub link is here.
While running the Perl script you may get a runtime error on line 154 concerning if(defined(@center)). Replace it with if(@center).
From the example, once the header file is generated with the data, you can use it as shown:

/*
created with obj2opengl.pl
source file    : ./banana.obj
vertices       : 4032
faces          : 8056
normals        : 4032
texture coords : 4420
*/

// include generated arrays
#include "./banana.h"

// set input data to arrays
glVertexPointer(3, GL_FLOAT, 0, bananaVerts);
glNormalPointer(GL_FLOAT, 0, bananaNormals);
glTexCoordPointer(2, GL_FLOAT, 0, bananaTexCoords);

// draw data
glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);
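One caveat worth adding: the fixed-function gl*Pointer calls above only take effect once the matching client states are enabled (this is the legacy fixed-function path, deprecated in modern core-profile OpenGL):

// enable the client-side arrays before the gl*Pointer / glDrawArrays calls
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);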

Related

C++: how to convert a mesh object into an FBX file

I am trying to understand the best approach to converting a 3D object, defined by a series of vector coordinates, into a .fbx file within a C++ language environment.
Let's use a simple example: say I have a simple wire-frame cube that exists as a series of 12 vectors (a cube has 12 edges), each consisting of a start and end 3D x, y, z coordinate, e.g.
int vec1[2][3] = {
    {0, 0, 0},
    {1, 0, 0}
};
This is in a sense a mesh object although it is not in any standard .MESH file form.
My question is how best I would go about writing code to convert this into the correct structure to be saved as a .fbx file.
Additionally I have found online much information regarding:
fbx parsers
fbx writers
fbx sdk
However, I do not believe these are exactly what I am looking for (please correct me if I am wrong). In my case I would like, in a sense, to generate a .fbx file from scratch, with no prior file type to begin with or convert from.
Any information on this topic such as a direct solution or even just the correct terminology that I can then use to direct my own more specific research, would be much appreciated.
Kind Regards,
Ichi.

DXR Descriptor Heap management for raytracing

After watching videos and reading the documentation on DXR and DX12, I'm still not sure how to manage resources for DX12 raytracing (DXR).
There is quite a difference between rasterizing and raytracing in terms of resource management, the main difference being that rasterizing has a lot of transient resources that can be bound on the fly, while raytracing needs all resources ready to go at the time of casting rays. The reason is obvious: a ray can hit anything in the whole scene, so we need to have every shader, every texture, every heap ready and filled with data before we cast a single ray.
So far so good.
My first test was adding all resources to a single heap, based on some DXR tutorials. The problem with this approach arises with objects having the same shaders but different textures. I defined 1 shader root signature for my single hit group, which I had to prepare before raytracing. But when creating a root signature, we have to say exactly which position in the heap corresponds to the SRV where the texture is located. Since there are many textures at different positions in the heap, I would need to create 1 root signature per object with different textures. This of course is not preferred, since based on the documentation and common sense we should keep the number of root signatures as small as possible.
Therefore, I discarded this test.
My second approach was creating a descriptor heap per object, which contained all local descriptors for that particular object (textures, constants, etc.). The global resources, i.e. the TLAS (Top Level Acceleration Structure), the output, and the camera constant buffer, were kept in a separate global heap. In this approach I think I misunderstood the documentation by thinking I can add multiple heaps to a root signature. As I'm writing this post, I could not find a way of adding 2 separate heaps to a single root signature. If this is possible, I would love to know how, so any help is appreciated.
Here is the code I'm using for my root signature (using the DX12 helpers):
bool PipelineState::CreateHitSignature(Microsoft::WRL::ComPtr<ID3D12RootSignature>& signature)
{
    const auto device = RaytracingModule::GetInstance()->GetDevice();
    if (device == nullptr)
    {
        return false;
    }
    nv_helpers_dx12::RootSignatureGenerator rsc;
    rsc.AddRootParameter(D3D12_ROOT_PARAMETER_TYPE_SRV, 0); // "t0" vertices and colors
    // Add a single range pointing to the TLAS in the heap
    rsc.AddHeapRangesParameter({
        {2 /*t2*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1}, /* 2nd slot of the first heap */
        {3 /*t3*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 3}, /* 4th slot of the first heap; per-instance data */
    });
    signature = rsc.Generate(device, true);
    return signature.Get() != nullptr;
}
Now my last approach would be to create a heap containing all necessary resources per object (TLAS, CBVs, SRVs/textures, etc.), effectively 1 heap per object. Again, as I was reading the documentation, this was not advised: the documentation states that we should group resources into global heaps. At this point I have the feeling I'm mixing DX12 and DXR documentation and best practices, applying proposals from DX12 in the DXR domain, which is probably wrong.
I also read partly through the Nvidia Falcor source code, and they seem to have one resource heap per descriptor type, effectively limiting the number of descriptor heaps to a minimum (makes total sense), but I have not yet found how a root signature is created with multiple separate heaps.
I feel like I'm missing one last puzzle piece to this mystery before it all falls into place and creates a beautiful image. So if anyone could explain how resource management (heaps, descriptors, etc.) should be handled in DXR when we want many objects with different resources, it would help me a lot.
So thanks in advance!
Jakub
With DXR you need to start at shader model 6.2, where dynamic indexing got much more official support than the "secret" approach of 5.1 (where the last descriptor of a range was free to be indexed past its declared bounds).
Now you have full "bindless" using a type var[] : register(t4, space1); declarative syntax, and you can index freely: var[1] will access register t5 in space1, and so on.
You can set up register ranges in the descriptor table, so if you have 100 textures you can span all 100.
You can even declare other resources after the array variable, as long as you remember to jump over all the registers it occupies. But it's easier to use different virtual spaces:
float4    ambiance      : register(b0, space0);
Texture2D all_albedos[] : register(t0, space1);
float4x4  world         : register(b1, space0);
Now you can go up to t100 and beyond with no disturbance to the following space0 declarations.
The limit on the register value is lifted in SM6; it is now "up to the max supported heap allocation".
So all_albedos[3400].Sample(..) is a perfectly acceptable call (provided your heap has bound the views).
Unfortunately, DX12 gives you the feeling that you can bind multiple heaps with the CommandList::SetDescriptorHeaps function, but if you try you'll get runtime errors:
D3D12 ERROR: ID3D12CommandList::SetDescriptorHeaps: pDescriptorHeaps[1] sets a descriptor heap type that appears earlier in the pDescriptorHeaps array.
Only one of any given descriptor heap type can be set at a time. [ EXECUTION ERROR #554: SET_DESCRIPTOR_HEAP_INVALID]
It's misleading, so don't trust that plural 's' in the method name.
Really, if we have multiple heaps, that would only be because of a triple-buffered circular update/usage scheme, or the upload vs. shader-visible split, I suppose. Just put everything in your one heap, and let the descriptor tables index into it as needed.
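A minimal sketch of that single-heap setup (assuming d3d12.h and wrl/client.h are included, and that device and cmdList stand in for your existing ID3D12Device and command list; the descriptor count is an arbitrary example):

// One shader-visible CBV/SRV/UAV heap holds every descriptor the rays can touch.
D3D12_DESCRIPTOR_HEAP_DESC desc = {};
desc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
desc.NumDescriptors = 1024; // TLAS SRV + output UAV + all texture SRVs, etc.
desc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

Microsoft::WRL::ComPtr<ID3D12DescriptorHeap> heap;
device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));

// Only one heap of each type (CBV_SRV_UAV, SAMPLER) may be bound at a time.
ID3D12DescriptorHeap* heaps[] = { heap.Get() };
cmdList->SetDescriptorHeaps(1, heaps);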
A descriptor table is a very lightweight element; it's just 3 ints: a descriptor start, a span, and a virtual space. Just use that; you can span 1000 textures if you have 1000 textures in your scene. You can get the material ID by embedding it in an indirection texture with unique UVs (like a lightmap), or in the vertex data, or just from the hit group (if you set things up as 1 hit group = 1 object). Your hit group index, which is given by a system value in the shader, will then be your texture index.
Dynamic indexing in HLSL 5.1 might be the solution to this issue.
https://learn.microsoft.com/en-us/windows/win32/direct3d12/dynamic-indexing-using-hlsl-5-1
With dynamic indexing, we can create one heap containing all materials and use a per-object index in the shader to pick the correct material at run time (see the sketch below).
Therefore we do not need multiple heaps of the same type, which is not possible anyway: only 1 heap per heap type can be bound at a time.
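A hedged HLSL sketch of what that per-object lookup might look like (all names here are illustrative, not from the question):

// All material textures live in one unbounded range of the single heap.
Texture2D    gMaterials[] : register(t0, space1);
SamplerState gSampler     : register(s0);

cbuffer PerObject : register(b0)
{
    uint gMaterialIndex; // written per object from the CPU side
};

float4 SampleAlbedo(float2 uv)
{
    // NonUniformResourceIndex is required when the index can diverge per ray;
    // SampleLevel rather than Sample, as ray tracing shaders have no implicit derivatives.
    return gMaterials[NonUniformResourceIndex(gMaterialIndex)].SampleLevel(gSampler, uv, 0);
}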

How do images work in opencl kernel?

I'm trying to find ways to copy multidimensional arrays from host to device in OpenCL, and I thought an approach would be to use an image, which can be a 1-, 2-, or 3-dimensional object. However, I'm confused, because when reading a pixel from an image, the examples use vector datatypes. Normally I would think double pointer, but it doesn't sound like that is what is meant by vector datatypes. Anyway, here are my questions:
1) What is actually meant by vector datatype, and why wouldn't we just specify 2 or 3 indices when denoting pixel coordinates? It looks like a single value such as float2 is being used to denote coordinates, but that makes no sense to me. I'm looking at the functions read_imageui and read_image.
2) Can the input image just be a subset of the entire image, and the sampler a subset of the input image? I don't understand how the coordinates are actually specified here either, since read_image() only seems to take a single value for the image and a single value for the sampler.
3) If doing linear algebra, should I just bite the bullet and translate 1-D array data from the buffer into multi-dim arrays in opencl?
4) I'm still interested in images, so even if what I want to do is not best for images, could you still explain questions 1 and 2?
Thanks!
EDIT
I wanted to refine my question and ask: in the following Khronos documentation they define...
int4 read_imagei(
    image2d_t image,
    sampler_t sampler,
    int2 coord)
But nowhere can I find what image2d_t's definition or structure is supposed to be. The same thing goes for sampler_t and int2 coord. They seem like structs to me, or pointers to structs, since OpenCL is supposed to be based on ANSI C, but what are the fields of these structs, and how do I write the coord with what looks like a scalar?! I've seen the notation (int2)(x,y), but that's not ANSI C, that looks like Scala, haha. Things seem conflicting to me. Thanks again!
In general you can read from images in three different ways:
direct pixel access, no sampling
sampling, normalized coordinates
sampling, integer coordinates
The first one is what you want, that is, you pass integer pixel coordinates like (10, 43) and it will return the contents of the image at that point, with no filtering whatsoever, as if it were a memory buffer. You can use the read_image*() family of functions which take no sampler_t param.
The second one is what most people want from images, you specify normalized image coords between 0 and 1, and the return value is the interpolated image color at the specified point (so if your coordinates specify a point in between pixels, the color is interpolated based on surrounding pixel colors). The interpolation, and the way out-of-bounds coordinates are handled, are defined by the configuration of the sampler_t parameter you pass to the function.
The third one is the same as the second one, except the texture coordinates are not normalized, and the sampler needs to be configured accordingly. In some sense the third way is closer to the first, and the only additional feature it provides is the ability to handle out-of-bounds pixel coordinates (for instance, by wrapping or clamping them) instead of you doing it manually.
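To make the coordinate syntax from the question concrete, here is a minimal kernel sketch (hedged; the (int2)(x, y) form is an OpenCL C vector literal, not a struct or Scala):

// Copies one pixel per work-item using integer, unnormalized coordinates
// (the third access mode described above).
__kernel void copy_pixel(__read_only image2d_t src,
                         __write_only image2d_t dst)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int2 coord = (int2)(get_global_id(0), get_global_id(1)); // vector literal
    uint4 pixel = read_imageui(src, smp, coord); // for unsigned-integer pixel formats
    write_imageui(dst, coord, pixel);
}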
Finally, the different versions of each function, e.g. read_imagef, read_imagei, read_imageui are to be used depending on the pixel format of your image. If it contains floats (in each channel), use read_imagef, if it contains signed integers (in each channel), use read_imagei, etc...
Writing to an image on the other hand is straightforward, there are write_image{f,i,ui}() functions that take an image object, integer pixel coordinates and a pixel color, all very easy.
Note that you cannot read and write to the same image in the same kernel! (I don't know if recent OpenCL versions have changed that). In general I would recommend using a buffer if you are not going to be using images as actual images (i.e. input textures that you sample or output textures that you write to only once at the end of your kernel).
About the image2d_t, sampler_t types, they are OpenCL "pseudo-objects" that you can pass into a kernel from C (they are reserved types). You send your image or your sampler from the C side into clSetKernelArg, and the kernel gets back a sampler_t or an image2d_t in the kernel's parameter list (just like you pass in a buffer object and it gets a pointer). The objects themselves cannot be meaningfully manipulated inside the kernel, they are just handles that you can send into the read_image/write_image functions, along with a few others.
As for the "actual" low-level difference between images and buffers: GPUs often have specially reserved texture memory that is highly optimized for "read often, write once" access patterns, with special texture sampling hardware and texture caches to optimize scattered reads, mipmaps, etc.
On the CPU there is probably no underlying difference between an image and a buffer, and your runtime likely implements both as memory arrays while enforcing image semantics.

Matlab griddata equivalent in C++

I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have C++ code that uses Eigen 3. I will have an Eigen vector containing x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in Matlab. I would like to interpolate the z values from the vector onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe, I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has a matplotlib functionality. Presumably there must be an interpolation used somewhere in that for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a .cpp file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and its VoxelGrid filter. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x,y bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this). So you need to massage the x,y values when you're done:
Ex:

#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// read in a list of comma separated values (x,y,z)
FILE* fp = fopen("points.xyz", "r");

// store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZ>);
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) == 3) // stop at EOF or a malformed line
{
    pcl::PointXYZ basic_point;
    basic_point.x = x; basic_point.y = y; basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size();
basic_cloud_ptr->height = 1;

// create object for result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>());

// create filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud(basic_cloud_ptr);
// set the bin sizes here (dx, dy, dz); for 2D results, make one of the bins
// larger than the data set span in that axis
sor.setLeafSize(0.1f, 0.1f, 1000.0f);
sor.filter(*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2-D matrix and go through the point cloud, assigning points to their x,y bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
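That last binning step, hedged into a sketch (grid origin, resolution, and bin sizes are assumptions that must match your data and the leaf sizes used above):

// Assign each filtered point to its (i, j) bin to build a griddata-like 2-D field.
const int    nx = 100,   ny = 100;   // grid resolution (assumption)
const double xmin = 0.0, ymin = 0.0; // grid origin (assumption)
const double dx = 0.1,   dy = 0.1;   // bin sizes, matching setLeafSize above

std::vector<std::vector<double>> grid(ny, std::vector<double>(nx, 0.0));
for (const auto& p : cloud_filtered->points)
{
    const int i = static_cast<int>((p.x - xmin) / dx);
    const int j = static_cast<int>((p.y - ymin) / dy);
    if (i >= 0 && i < nx && j >= 0 && j < ny)
        grid[j][i] = p.z; // one representative z per bin, as VoxelGrid guarantees
}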

OBJ, Buffer objects, and face indices

I most recently had great progress in getting Vertex buffer objects to work.
So I moved on to element arrays, and I figured that with those implemented I could load vertex and face data from an obj.
I'm not too good at reading files in C++, so I wrote a Python script to parse the obj and write two separate text files giving me a vertex array and face indices, which I pasted directly into my code. That's like 6000 lines, but it works (it compiles without errors).
Here's what it looks like: [screenshot of the garbled render]
I think the faces are wrong; I'm not sure. The order of the vertices and faces isn't changed, just extracted from the obj, because I don't have normals or textures working for buffer objects yet. I kinda do if you look at the cube, but not really.
Here's the render code:
void Mesh_handle::DrawTri(){
    glBindBuffer(GL_ARRAY_BUFFER, vertexbufferid);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbufferid);
    int index1 = glGetAttribLocation(bound_program, "inputvertex");
    int index2 = glGetAttribLocation(bound_program, "inputcolor");
    int index3 = glGetAttribLocation(bound_program, "inputtexcoord");
    glEnableVertexAttribArray(index1);
    glVertexAttribPointer(index1, 3, GL_FLOAT, GL_FALSE, 9*sizeof(float), 0);
    glEnableVertexAttribArray(index2);
    glVertexAttribPointer(index2, 4, GL_FLOAT, GL_FALSE, 9*sizeof(float), (void*)(3*sizeof(float)));
    glEnableVertexAttribArray(index3);
    glVertexAttribPointer(index3, 2, GL_FLOAT, GL_FALSE, 9*sizeof(float), (void*)(7*sizeof(float)));
    glDrawArrays(GL_TRIANGLE_STRIP, 0, elementcount);
    //glDrawElements(GL_TRIANGLE_STRIP, elementcount, GL_UNSIGNED_INT, 0);
}
My Python parser, which just writes the info into a file: source
The object is Ezreal from League of Legends
I'm not sure if I'm reading the faces wrong, or if they're not even what I thought they were. Am I supposed to use GL_TRIANGLE_STRIP or something else? Any hints? Or request more info.
Indices in obj files are 1-based, so you have to subtract 1 from all indices in order to use them with OpenGL.
First, as Andreas stated, .obj files use 1-based indices, so you need to convert them to 0-based indices.
Second:
glDrawArrays(GL_TRIANGLE_STRIP,0,elementcount);
//glDrawElements(GL_TRIANGLE_STRIP,elementcount,GL_UNSIGNED_INT,0);
Unless you did some special work to turn the face list you were given in your .obj file into a triangle strip, you don't have triangle strips. You should be rendering GL_TRIANGLES, not strips.
From the image, your vertices are for sure messed up. It looks like you specified a stride of 9*sizeof(float) in your glVertexAttribPointer calls, but from what I can tell from your code, your array is tightly packed.
glEnableVertexAttribArray(index1);
glVertexAttribPointer(index1, 3, GL_FLOAT, GL_FALSE, 0, 0);

Also remove the stride from the color/texture coord attributes.