I have been trying to find examples or documentation on how to implement functionality using the Unreal 4 ProceduralMeshComponents through code. The documentation of these classes on the website is very sparse and only provides the barest details of how they function:
https://docs.unrealengine.com/latest/INT/BlueprintAPI/Components/ProceduralMesh/index.html
I know, I know, they are already exposed to the Blueprint Editor so I am aware I can use them in the engine itself. However, I want to understand the exact ins and outs of the process, which means that I need to implement this in a project through code.
Also I feel that using these components through Blueprint nodes alone limits the extent of what can be done with this powerful functionality.
I have also searched for examples (either on the net or on the forums) but can't find any that don't involve using Blueprints in some way. The other problem is that this functionality was introduced relatively recently; before that, Rama (a stellar Unreal user) had put up a similar API that allowed procedural mesh generation. However, it is deprecated now and many examples refer to that version instead.
Don't get me wrong, I'm not dissing Blueprints here. I love the tool and consider them one of the best bits of Unreal 4. But for my purpose I require the process to be completely exposed to me from start to finish.
I would appreciate any resources or examples you could share that implement the Unreal Procedural Mesh classes entirely through code to some effect.
This is quite a big question, since Procedural Mesh Components can be used in very many ways. They just represent an arbitrary runtime-generated mesh.
Most of the functions listed in the documentation are quite self-explanatory if you know the data representations of meshes in 3D applications.
A mesh can have several LODs. Each individual mesh LOD is made up of sections. Each section can have a visual representation and a collision representation. For the "visual" representation there are lists of point locations, lists of triangles (each represented by three point indices), lists of edges connecting two points, lists of normals for each vertex, lists of UV-space positions for each point, and so on. For the "collision" representation of the meshes there are of course separate lists, in most cases smaller in size for more optimized calculation.
Depending on your use case, you fill these lists with the data you need. This data can of course be generated in whatever way you like; that is up to you. Whatever you do, you just need to end up with arrays of the needed data, be it points, triangles, edges, etc.
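To make that concrete, here is a minimal sketch of filling those lists in C++ and handing them to UE4's UProceduralMeshComponent::CreateMeshSection. The actor class and the ProcMesh member are illustrative names, and the ProceduralMeshComponent module is assumed to be listed in the project's Build.cs dependencies:

```cpp
// Minimal sketch: one triangle in a UProceduralMeshComponent (UE4).
// AMyProcMeshActor and its ProcMesh member are illustrative names.
#include "ProceduralMeshComponent.h"

void AMyProcMeshActor::BuildTriangle()
{
    TArray<FVector> Vertices;
    Vertices.Add(FVector(0.f, 0.f, 0.f));
    Vertices.Add(FVector(0.f, 100.f, 0.f));
    Vertices.Add(FVector(0.f, 0.f, 100.f));

    // Each triangle is three indices into the vertex array.
    TArray<int32> Triangles;
    Triangles.Add(0);
    Triangles.Add(1);
    Triangles.Add(2);

    // One normal and one UV per vertex, parallel to Vertices.
    TArray<FVector> Normals;
    Normals.Init(FVector(-1.f, 0.f, 0.f), 3);

    TArray<FVector2D> UVs;
    UVs.Add(FVector2D(0.f, 0.f));
    UVs.Add(FVector2D(1.f, 0.f));
    UVs.Add(FVector2D(0.f, 1.f));

    TArray<FColor> VertexColors;       // optional, may be left empty
    TArray<FProcMeshTangent> Tangents; // optional, may be left empty

    // ProcMesh is assumed to be a UProceduralMeshComponent* member of this actor.
    ProcMesh->CreateMeshSection(0, Vertices, Triangles, Normals, UVs,
                                VertexColors, Tangents, /*bCreateCollision=*/true);
}
```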
"can't find any that don't involve using Blueprints in some way"
The beauty of UE is that any Blueprint example can act as a C++ example. You can recreate a BP graph in code one-to-one fairly easily, since BP nodes are based on C++ functions marked as UFUNCTION(BlueprintCallable). To find the C++ version of a node, you can jump to its source by double-clicking the node in the editor.
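As a rough illustration of that mapping (class and function names here are made up, not part of any real API), the C++ side of a Blueprint-callable node is just an ordinary function carrying the macro:

```cpp
// Illustrative header sketch: RebuildMesh appears as a Blueprint node because
// of UFUNCTION(BlueprintCallable); the class and names are hypothetical.
#include "GameFramework/Actor.h"
#include "MyMeshActor.generated.h"

UCLASS()
class AMyMeshActor : public AActor
{
    GENERATED_BODY()

public:
    // Callable both as a Blueprint node and directly from other C++ code.
    UFUNCTION(BlueprintCallable, Category = "Procedural Mesh")
    void RebuildMesh(int32 SectionIndex);
};
```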
P.S. I realize it may not be ideal that I give you a long-winded explanation instead of code examples, but for a question this broad there really is no one-size-fits-all code snippet.
Plus, procedural mesh generation is a massive field of its own, with each individual scenario requiring thousands of lines of specialized code. Whether you generate a procedural terrain, a procedural animal, or a procedural chair, you can imagine the code is both very complex and so specific that it is nearly useless for the other use cases.
I'm working in C++ with large voxel grids in a scientific context and I'm trying to decide which library to use. Only a fraction of the voxel grid holds values, but there might be several per voxel (e.g. a struct), which are determined by ray tracing. I'm not trying to render anything, but I have to determine the potential number of rays passing through the entire target area, so an awful lot of ray-box intersections will have to be calculated, preferably very fast.
So far, I found
OpenVDB http://www.openvdb.org/
Field3d http://sites.google.com/site/field3d/
The latter appeals a bit more, because it seems simpler/easier to use.
My question is: which of them would be more suited for tasks that are not aimed at rendering/visualization? Which one is faster/better when computing a lot of ray-box intersections (no viewpoint-dependent culling possible)? Suggestions, anyone?
In any case, I want to use an existing C++ library and not write a kd-tree/octree etc. myself. I don't have the time to reinvent the wheel.
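Whichever library ends up holding the voxel data, the per-ray work against each box is usually the standard slab test. As a library-independent point of reference (not tied to OpenVDB or Field3D), a minimal sketch of that core operation:

```cpp
// Standard slab-method ray/AABB intersection test (library-independent sketch).
#include <algorithm>
#include <array>
#include <limits>
#include <utility>

struct Vec3 { double x, y, z; };

// Returns true if the ray origin + t*dir (t >= 0) hits the axis-aligned box
// [bmin, bmax]. invDir holds the precomputed reciprocals 1/dir per component;
// tNear/tFar receive the entry and exit parameters.
bool RayBoxIntersect(const Vec3& origin, const Vec3& invDir,
                     const Vec3& bmin, const Vec3& bmax,
                     double& tNear, double& tFar)
{
    const std::array<double, 3> o{origin.x, origin.y, origin.z};
    const std::array<double, 3> id{invDir.x, invDir.y, invDir.z};
    const std::array<double, 3> lo{bmin.x, bmin.y, bmin.z};
    const std::array<double, 3> hi{bmax.x, bmax.y, bmax.z};

    tNear = 0.0;
    tFar  = std::numeric_limits<double>::max();
    for (int i = 0; i < 3; ++i)
    {
        double t0 = (lo[i] - o[i]) * id[i];
        double t1 = (hi[i] - o[i]) * id[i];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false; // ray misses this slab pair
    }
    return true;
}
```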
I would advise
OpenSceneGraph
Ogre3D
VTK
I have personally used the first two. However, VTK is also a popular alternative. All three of them support voxel based rendering.
In my program I am tracking device movement with CMMotionManager and using the quaternion representation of the device attitude. For positioning the device in a global coordinate system I need to do some basic calculations involving quaternions. For that I want to write a Quaternion class with all the needed functions implemented. Does somebody know the proper way to do this, or perhaps some general guidelines for how it should be done?
I've found a sample project from Apple which has Quaternion, Vector2 and Vector3 classes written in C++, but I think they are not very easy to use from Cocoa, since I can't define properties of an object in an Obj-C header file using these C++ classes. Or am I wrong?
Thank you.
You're quite possibly not interested in OpenGL at all, but as part of GLKit Apple supplies GLKQuaternion. It has a C interface but should be easy to wrap in a class. Using it is recommended over expressing the mathematics directly, since it likely uses the hardware vector unit as fully as possible.
Alternatively, you can use GLKQuaternion directly from both C++ and Objective-C, since both are supersets of C.
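A minimal sketch of what that looks like in practice, using plain C calls from GLKit's math headers (compiles as Objective-C, Objective-C++, or C++ on Apple platforms; in a real app the quaternion would come from CMMotionManager's CMAttitude rather than being constructed by hand):

```cpp
// Rotating a vector by a device attitude using GLKit's C math types.
#include <GLKit/GLKMath.h>
#include <cmath>
#include <cstdio>

int main()
{
    // Stand-in for CMAttitude.quaternion: a 90-degree rotation about Z.
    GLKQuaternion attitude = GLKQuaternionMakeWithAngleAndAxis(M_PI_2, 0.f, 0.f, 1.f);
    attitude = GLKQuaternionNormalize(attitude);

    // Rotate a device-space vector into the global frame.
    GLKVector3 forward = GLKVector3Make(1.f, 0.f, 0.f);
    GLKVector3 rotated = GLKQuaternionRotateVector3(attitude, forward);

    std::printf("%f %f %f\n", rotated.x, rotated.y, rotated.z);
    return 0;
}
```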
You're asking multiple questions here: how to implement a Quaternion class, how to use quaternions to represent orientation (attitude), how to integrate a C++ implementation with Objective-C, and how to integrate all that into Cocoa. I'll answer the last with a question: does Cocoa really need to know what you are using under the hood to represent attitude?
There are lots of existing packages out there that use quaternions to represent orientation in three-dimensional space. Eigen is one. There are lots of others. Don't reinvent the wheel. There are some gotchas that you do need to beware of, particularly when using a third-party package. See "View Matrix from Quaternion". If you scroll down and look at some of the other answers, you'll see that two of the packages mentioned by others are subject to the very issues I talked about in my answer.
Note well: I am not saying don't use quaternions. That would be rather hypocritical; I use them all the time. They work very nicely as a compact means of representing rotation. You just need to beware of issues, mainly because too many people who use them / implement software for them are clueless regarding those issues.
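As a concrete illustration of "don't reinvent the wheel", this is roughly what representing and applying an orientation looks like with Eigen's quaternion type (a sketch, assuming Eigen 3):

```cpp
// Sketch of using Eigen's quaternion type to represent an orientation.
#include <Eigen/Geometry>
#include <cmath>
#include <iostream>

int main()
{
    // A 90-degree rotation about the Z axis, built from an angle-axis pair.
    Eigen::Quaterniond attitude(Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitZ()));
    attitude.normalize();

    // operator* applies the quaternion as a rotation to a vector.
    Eigen::Vector3d v(1.0, 0.0, 0.0);
    Eigen::Vector3d rotated = attitude * v;

    std::cout << rotated.transpose() << std::endl; // approximately (0, 1, 0)
    return 0;
}
```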
Pretty much the title explains what I try to achieve.
Given a set of points in two dimensions, I want to somehow create the curve that passes through all these points.
Whether it ends up as a graphical window showing the mathematical curve or just a produced JPG is of no importance.
Any help? Thx!
First of all, please refrain from tagging questions with C and C++, or using the term C/C++. C and C++ are two distinct, very different languages.
That being said, it seems you are looking for a way to plot data. There are different libraries that allow you to do that; among them are:
http://codecutter.org/tools/koolplot/
http://www.gnu.org/software/plotutils/
http://www.mps.mpg.de/dislin/
You can integrate those libraries into your application to produce plots of your data points. There are of course different, additional libraries, but these are the ones that came to my mind first.
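If the goal is specifically a smooth curve through the points (rather than just a scatter plot), one option is to generate densely sampled curve points yourself and feed them to any of the tools above. A minimal Catmull-Rom spline sketch; this kind of spline interpolates, i.e. it passes through every input point:

```cpp
// Catmull-Rom spline through a set of 2D points: writes densely sampled
// curve points to "curve.dat", which any plotting tool can then draw.
#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Catmull-Rom interpolation between p1 and p2, with p0/p3 as neighbours.
static Point CatmullRom(const Point& p0, const Point& p1,
                        const Point& p2, const Point& p3, double t)
{
    const double t2 = t * t, t3 = t2 * t;
    auto interp = [&](double a, double b, double c, double d) {
        return 0.5 * ((2.0 * b) + (-a + c) * t +
                      (2.0 * a - 5.0 * b + 4.0 * c - d) * t2 +
                      (-a + 3.0 * b - 3.0 * c + d) * t3);
    };
    return { interp(p0.x, p1.x, p2.x, p3.x), interp(p0.y, p1.y, p2.y, p3.y) };
}

int main()
{
    // The input points the curve must pass through.
    std::vector<Point> pts = { {0, 0}, {1, 2}, {2, 1}, {3, 3}, {4, 0} };

    std::FILE* out = std::fopen("curve.dat", "w");
    if (!out) return 1;

    for (size_t i = 0; i + 1 < pts.size(); ++i)
    {
        // Clamp the neighbour indices at the ends of the point list.
        const Point& p0 = pts[i == 0 ? 0 : i - 1];
        const Point& p1 = pts[i];
        const Point& p2 = pts[i + 1];
        const Point& p3 = pts[i + 2 < pts.size() ? i + 2 : pts.size() - 1];
        for (int s = 0; s <= 20; ++s)
        {
            Point p = CatmullRom(p0, p1, p2, p3, s / 20.0);
            std::fprintf(out, "%f %f\n", p.x, p.y);
        }
    }
    std::fclose(out);
    return 0;
}
```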
I'm wondering if there is a way to extract the necessary data out of an AutoCAD .dxf file so I can visualize the structure in OpenGL.
I've found some old code snippets for Windows written in C++, but since the standard has changed I assume 15-year-old code is a little outdated.
Also, there is a book about the .dxf file standard, but it's from the '90s and, aside from that, rarely available.
Another way might be to convert it to some other file format and then extract the data I need.
Trying to look into the .dxf files didn't give too much insight either since a simple cuboid contains a lot of data already!
Can anyone give me a hint on how to approach this?
The references are a good place to start, but if you are doing heavy 3D work it may not be possible to accomplish what you are attempting.
We recently wrote a DXF converter in Java based entirely on the references. Although many of the entities are relatively straightforward, many others (3DSOLID, BODY, REGION, SURFACE, Swept Surface) are not really possible to translate, since the reference states that their groups are primarily proprietary data. Other objects (Extruded Surface, Revolved Surface, Swept Surface (again)) have significant chunks of binary data which may hold important information you need.
These entities were not vital for our efforts, but if you are looking to convert to OpenGL, these may be the entities you were particularly concerned with.
Autodesk has references for the DXF formats used by recent revisions of AutoCAD. I'd probably take a second look at that 15-year-old code though. Even if you can't or don't use it as-is, it may provide a decent starting point. The DXF specification is sufficiently large and complex that having something to start from, and just adding new bits and pieces where needed, can be a big help. As an interchange format, DXF has to be pretty conservative anyway, only including elements that essentially all programs can interpret reasonably directly.
I'd probably be more concerned about the code itself than about changes in the DXF format. A lot of code that old uses deep, monolithic class hierarchies that are quite a bit different from what you'd expect in modern C++.
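As a starting point for the "how to approach this" part: an ASCII DXF file is just a sequence of line pairs, an integer group code followed by its value (group code 0 starts an entity; codes 10/20/30 carry the X/Y/Z of an entity's primary point). A minimal sketch that walks those pairs and collects coordinate triples; entity handling is deliberately simplistic and there is no error handling, so real files need much more care:

```cpp
// Minimal DXF reader sketch: ASCII DXF is a stream of (group code, value)
// line pairs. This just collects 10/20/30 coordinate triples per entity.
#include <fstream>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Vertex { double x = 0, y = 0, z = 0; };

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: dxfdump file.dxf\n"; return 1; }
    std::ifstream in(argv[1]);

    std::string codeLine, valueLine, currentEntity;
    Vertex v;
    std::vector<std::pair<std::string, Vertex>> points;

    while (std::getline(in, codeLine) && std::getline(in, valueLine))
    {
        const int code = std::stoi(codeLine);
        if (code == 0)                                    // new entity (LINE, 3DFACE, ...)
        {
            currentEntity = valueLine;
        }
        else if (code == 10) v.x = std::stod(valueLine);  // primary point X
        else if (code == 20) v.y = std::stod(valueLine);  // primary point Y
        else if (code == 30)                              // primary point Z: triple complete
        {
            v.z = std::stod(valueLine);
            points.push_back({currentEntity, v});
        }
    }

    for (const auto& p : points)
        std::cout << p.first << ": " << p.second.x << " "
                  << p.second.y << " " << p.second.z << "\n";
    return 0;
}
```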
Does anyone have some useful beginner tutorials and code snippets for playing with basic geometric shapes and geometric proofs in code?
In particular something with the ability to easily create functions and recursively draw them on the screen. Additional, though not absolute, requirements: support for Objective-C and basic window-drawing routines for OS X and Cocoa.
A specific question: how would one write a test to validate that a shape is in fact a square, triangle, etc.? The idea being that you could draw a bunch of shapes, fit them together, and then test and analyze the emergent shape that arises from the set of sub-shapes.
This is not a homework question. I am not in school. Just wanted to experiment with drawing code and geometry. And looking for an accessible way to play and experiment with shapes and geometry programming.
I am open to Java and Processing, or ActionScript/Haxe and Flash, but would also like to use Objective-C and Xcode to build projects as well.
What I am looking for are some clear tutorials to get me started down the path.
Some specific applications include clear examples of how to display, for example, parts of the Cantor set, Mandelbrot set, Julia set, etc.
One aside: I was reading on Wikipedia about Russell's Paradox, and the article stated:
Let us call a set "abnormal" if it is a member of itself, and "normal" otherwise. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal". On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal".
The point about squares seems intuitively wrong to me. All the squares added together seem to imply a larger square. Obviously I get the larger paradox about sets. But what I am curious about is playing around with shapes in code and analyzing them empirically in code. So for example a potential routine might be draw four squares, put them together with no space between them, and analyze the dimensions and properties of the new shape that they make.
Perhaps even allowing free hand drawing with a mouse. But for now just drawing in code is fine.
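For the "is this shape a square" sub-question, one framework-independent approach is a purely metric test on the four corner points: of the six pairwise squared distances, the four smallest (the sides) must be equal and nonzero, and the two largest (the diagonals) must be equal and twice the squared side length. A small sketch of that check:

```cpp
// Metric test for "these four points form a square": sort the six pairwise
// squared distances; the four sides must be equal and the two diagonals
// must be equal and twice the squared side length.
#include <algorithm>
#include <array>
#include <cmath>
#include <iostream>

struct Point { double x, y; };

static double Dist2(const Point& a, const Point& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// eps is an absolute tolerance; scale it with your coordinate range.
bool IsSquare(const std::array<Point, 4>& p, double eps = 1e-9)
{
    std::array<double, 6> d = {
        Dist2(p[0], p[1]), Dist2(p[0], p[2]), Dist2(p[0], p[3]),
        Dist2(p[1], p[2]), Dist2(p[1], p[3]), Dist2(p[2], p[3])
    };
    std::sort(d.begin(), d.end());
    const bool sidesEqual          = std::fabs(d[0] - d[3]) < eps && d[0] > eps;
    const bool diagonalsEqual      = std::fabs(d[4] - d[5]) < eps;
    const bool diagonalIsTwiceSide = std::fabs(d[4] - 2.0 * d[0]) < eps;
    return sidesEqual && diagonalsEqual && diagonalIsTwiceSide;
}

int main()
{
    std::array<Point, 4> unitSquare = {{ {0, 0}, {1, 0}, {1, 1}, {0, 1} }};
    std::cout << std::boolalpha << IsSquare(unitSquare) << "\n"; // true
    return 0;
}
```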
If you're willing to use C++ I would recommend two libraries:
boost::GGL, the Generic Geometry Library, handles lots of geometric primitives such as polygons, lines, points, and so forth. It's still pretty new, but I have a feeling that it's going to be huge when it's officially added into Boost.
CGAL, the Computational Geometry Algorithms Library: this thing is huge, and will do almost anything you'll ever need for geometry programming. It has very nice bindings for Qt as well if you're interested in doing some graphical stuff.
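GGL was in fact later merged into Boost as Boost.Geometry. To give a flavour of what working with it looks like, here is a small sketch (assuming a reasonably recent Boost with the Boost.Geometry headers available):

```cpp
// Small Boost.Geometry sketch: build a polygon from points and query
// its area and the distance between two points.
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <iostream>

namespace bg = boost::geometry;

int main()
{
    using PointT   = bg::model::d2::point_xy<double>;
    using PolygonT = bg::model::polygon<PointT>;

    PolygonT square;
    bg::append(square, PointT(0.0, 0.0));
    bg::append(square, PointT(0.0, 1.0));
    bg::append(square, PointT(1.0, 1.0));
    bg::append(square, PointT(1.0, 0.0));
    bg::append(square, PointT(0.0, 0.0)); // close the ring
    bg::correct(square);                  // fix orientation/closure if needed

    std::cout << "area: "     << bg::area(square)                          << "\n"
              << "distance: " << bg::distance(PointT(0, 0), PointT(1, 1)) << "\n";
    return 0;
}
```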
I guess OpenGL might not be the best starting point for this. It's quite low-level, and you will have to fight with unexpected behavior and actual driver issues. If you emphasize the "playing" part, go for Processing. It's a programming environment specifically designed to play with computer graphics.
However, if you really want to take the shape-testing path, an in-depth study of computer vision algorithms is inevitable. On the other hand, if you just want to compare your shapes to a reference image, without rotation, scaling, or other distortions, the Visual Difference Predictor library might help you.
I highly recommend NeHe for any beginner OpenGL programmer, once you complete the first few tutorials you should be able to have fun with geometry any way you want.
Hope that helps