"The rendering engine generates animated 3D graphics by any of a number of methods (rasterization, ray-tracing etc.).
Instead of being programmed and compiled to be executed on the CPU or GPU directly, most often rendering engines are built upon one or multiple rendering application programming interfaces (APIs), such as Direct3D, OpenGL, or Vulkan which provide a software abstraction of the graphics processing unit (GPU). Low-level libraries such as DirectX, Simple DirectMedia Layer (SDL), and OpenGL are also commonly used in games as they provide hardware-independent access to other computer hardware such as input devices (mouse, keyboard, and joystick), network cards, and sound cards." - Game Engine
"UNISURF was a pioneering surface CAD/CAM system, designed to assist with car body design and tooling. It was developed by French engineer Pierre Bézier for Renault in 1968, and entered full use at the company in 1975.[1][2] By 1999, around 1,500 Renault employees made use of UNISURF for car design and manufacture." Advent of CAD/CAM Systems
"A geometric modeling kernel is a 3D solid modeling software component used in computer-aided design packages" Geometric Modeling Kernel
I am struggling to understand the underlying architecture of geometric modeling kernels compared to game engines and physics engines.
Questions:
Do I understand correctly that geometric modeling kernels are actually low-level APIs, more specifically kernel loadable extensions, used specifically to handle the rendering of geometric operations, like creating a boundary representation of objects on the screen?
How do geometric modeling kernels differ from OpenGL-derived APIs? Are they also written in C++, or in older languages, since, I believe, they appeared earlier?
Do I understand correctly that geometric modeling kernels like ACIS and Parasolid continue to use their own proprietary low-level modules instead of OpenCL/OpenGL, or is it a mix?
What is the architecture of a physics engine in terms of APIs? Does it use OpenGL or other derived low-level graphics APIs? Take Havok, for example: does it rely on another low-level API, say Direct3D?
A geometric modeling kernel is exactly that: a modeling kernel. It allows constructing or modifying geometry and has nothing to do with displaying that geometry on the screen. It also differs from sculpting applications, because the latter are used by artists while modeling kernels are used by engineers, and hence it has very different inputs, even when constructing a visually similar model.
Modern modeling kernels are usually accompanied by a 3D renderer for displaying models, but this functionality is usually put into dedicated components within the framework. Platforms offer only a limited set of hardware-accelerated graphics libraries (OpenGL, Vulkan, Direct3D), so the 3D graphics engine that comes with a modeling kernel usually relies on one of these lower-level libraries. Historically, OpenGL was used by the majority of industrial applications (in contrast to games), but this might not be the case today.
The language in which a modeling kernel is written may differ, but I believe most are written in C++. As modeling kernels started life in older days, they might inherit some intermediate languages, like CDL in OCCT (whose remnants have been removed since OCCT 7.0.0), or code originating from other languages (such as FORTRAN, popular in the past). The modeling kernels most likely don't use these languages anymore, but you may be able to tell from the source code that the C++ code of some algorithms was converted from FORTRAN at some point (though of course you cannot check this with proprietary kernels).
If you take a look at the component structure of Open CASCADE Technology, an open-source solid modeling kernel, you will find that its Visualization component implements interactive services for displaying models using OpenGL or another low-level graphics library, but an OCCT-based application does not have to use it and may display shapes using other libraries.
In an attempt to generalize:
A graphics engine implements services for rendering existing geometry, and is implemented on top of low-level APIs like OpenGL. This includes the shading/material model (Phong, PBR metallic-roughness), camera definition, and a bunch of other tools that do not come with the low-level APIs.
A geometric modeling kernel implements data structures (like Boundary Representation or CSG) and the complex math for model construction by engineers (including primitives, fillets, and Boolean operations) on exact geometry represented by B-Splines and the like (in contrast to artist-oriented tools, which usually work on polygonal geometry). The framework may provide other tools, including a graphics engine, but these are usually separated from the geometry kernel. Graphics engines usually do not work directly with B-Spline geometry, so the geometric modeling kernel has to generate a triangulation for rendering the geometry (see the sketch after this list).
A physics engine implements only services related to physics simulation, including a collision detection module. The project may also contain samples using some graphics library, but the kernel itself should not depend on any.
A game engine combines a graphics engine, a physics engine, and an audio engine, and usually also provides a scripting language and other tools to simplify game development.
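To make the distinction concrete, here is a minimal sketch of what using a modeling kernel looks like, based on Open CASCADE Technology (the classes below are standard OCCT API; the dimensions and deflection value are arbitrary). Note that nothing here touches OpenGL: the kernel performs an exact Boolean operation on B-Rep geometry, and only the last line generates the triangulation that a separate graphics engine would consume.

    #include <BRepPrimAPI_MakeBox.hxx>
    #include <BRepPrimAPI_MakeCylinder.hxx>
    #include <BRepAlgoAPI_Cut.hxx>
    #include <BRepMesh_IncrementalMesh.hxx>
    #include <TopoDS_Shape.hxx>

    int main()
    {
        // Exact B-Rep modeling: a box with a cylindrical hole cut out of it.
        TopoDS_Shape box  = BRepPrimAPI_MakeBox(100.0, 60.0, 40.0).Shape();
        TopoDS_Shape hole = BRepPrimAPI_MakeCylinder(15.0, 40.0).Shape();
        TopoDS_Shape part = BRepAlgoAPI_Cut(box, hole).Shape();

        // The kernel itself never draws anything. To display the result,
        // a graphics engine needs a triangulation of the exact geometry:
        BRepMesh_IncrementalMesh mesh(part, 0.5); // 0.5 = linear deflection
        return 0;
    }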
AGG (Anti-Grain Geometry) is a High Quality Rendering Engine for C++.
OpenGL ES is a royalty-free, cross-platform API for full-function 2D and 3D graphics on embedded systems.
But AGG seems more efficient than OpenGL ES for map rendering; for example, Mapnik uses AGG.
Q1: Why does Mapbox GL use OpenGL rather than AGG?
Q2: What's the difference between AGG and OpenGL ES?
Thanks! :)
OpenGL is an API for managing buffers on a GPU and specifying functions to map data between them; having originally been designed for the rendering of 3D geometry, it's still primarily oriented around that goal. It's an open standard with 25 years of history, implemented by all of the major vendors on all of the major operating systems; a subset of it is now even incorporated into standards-compliant web browsers.
Anti-Grain Geometry is a CPU-based 2d rasterisation library from a single vendor that appears to have started somewhere around 2001 and hasn't seen any web page updates since 2007. The most recent post to its mailing list is about its fractured state due to various independent downstream patches.
A developer might prefer AGG to OpenGL because the latter is very low level and not especially developer friendly. It provides very little unless you put the effort in and debugging tools are often poor. The former appears to be a high-level library which, since it operates on the CPU, will be amenable to your normal debugger.
However, AGG isn't accelerated, has no clear ownership or future, has no forum for governance, and isn't widely available.
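To illustrate the "high-level, CPU-based" point: with AGG you rasterize directly into a memory buffer you own, with no GPU or windowing system involved. A minimal sketch (the headers and class names are standard AGG 2.x; error handling omitted):

    #include <vector>
    #include "agg_rendering_buffer.h"
    #include "agg_pixfmt_rgb.h"
    #include "agg_renderer_base.h"
    #include "agg_renderer_scanline.h"
    #include "agg_rasterizer_scanline_aa.h"
    #include "agg_scanline_p.h"
    #include "agg_path_storage.h"

    int main()
    {
        // A plain memory buffer: AGG renders on the CPU, no GPU required.
        const int w = 320, h = 240;
        std::vector<unsigned char> buf(w * h * 3);

        agg::rendering_buffer rbuf(buf.data(), w, h, w * 3);
        agg::pixfmt_rgb24 pixf(rbuf);
        agg::renderer_base<agg::pixfmt_rgb24> ren(pixf);
        ren.clear(agg::rgba8(255, 255, 255));

        // Describe a vector path and rasterize it with anti-aliasing.
        agg::path_storage path;
        path.move_to(20, 20);
        path.line_to(300, 40);
        path.line_to(160, 220);
        path.close_polygon();

        agg::rasterizer_scanline_aa<> ras;
        agg::scanline_p8 sl;
        ras.add_path(path);
        agg::render_scanlines_aa_solid(ras, sl, ren, agg::rgba8(40, 100, 200));
        return 0;
    }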
Re Q1 and Q2:
OpenGL/-ES is usually GPU accelerated (in fact on most platforms with OpenGL-ES support, OpenGL-ES is available only if a GPU is present). AGG is a software rasterizer.
Thus, if a GPU is present, it's usually more efficient/performant to use OpenGL/-ES if the intention is to generate output for an interactive (realtime) display.
WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
The accepted answer is incorrect unfortunately. Stage 3D uses:
DirectX on Windows systems
OpenGL on OSX systems
OpenGL ES on mobile
Software renderer when no hardware acceleration is available (due to older hardware or no hardware at all).
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on anything, though it may share similar methodology/terminology. It is another rendering pipeline; this is why Adobe is so pumped about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provide developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you’re using cutting-edge 3D game engines or the intuitive, lightning fast Starling 2D framework that powers Angry Birds.
For example, in some games there are 3 different display modes:
OpenGL
DirectX
Software
What is this software mode? Like, how do programmers make a game engine that generates images without using OpenGL or DirectX? Are there classes in C++ that generate frames?
Software means exactly that: software.
All rendering is, ultimately, coloring pixels via some algorithm. That algorithm can be done by dedicated hardware, but you could simply implement those functions yourself in actual code. Now, that doesn't mean it's particularly fast; it takes a great deal of skill to implement a triangle rasterizer that has decent speed.
Software Mode can mean two things:
A system-provided emulation layer. For example, DX11 provides the WARP device, where you, as the application programmer, just specify "I want to use WARP" and the rest is done by DirectX (see the device-creation sketch after this list). The emulation layer basically does option number 2:
Do it all by hand. Essentially, a hardware-accelerated graphics card mostly only draws triangles. You can write a function that draws the pixels of a textured triangle directly into the screen memory of the graphics card. It's not very fast nowadays (that's why hardware-accelerated graphics cards exist), but that's how it was done in the '80s and '90s, when no such cards existed yet. A bare-bones rasterizer along these lines is also sketched below.
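For option 1, this is roughly what "just specify WARP" amounts to in D3D11; a minimal sketch with error handling and cleanup omitted:

    #include <d3d11.h>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        ID3D11Device*        device  = nullptr;
        ID3D11DeviceContext* context = nullptr;

        // Same call as for a real GPU, but with the WARP driver type:
        // DirectX then runs the whole pipeline in software on the CPU.
        HRESULT hr = D3D11CreateDevice(
            nullptr,               // default adapter
            D3D_DRIVER_TYPE_WARP,  // instead of D3D_DRIVER_TYPE_HARDWARE
            nullptr, 0, nullptr, 0,
            D3D11_SDK_VERSION,
            &device, nullptr, &context);
        return SUCCEEDED(hr) ? 0 : 1;
    }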
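And for option 2, a bare-bones sketch of filling a flat-colored (untextured) triangle purely in software, using the classic edge-function inside test over the triangle's bounding box. The buffer layout and names here are illustrative, not from any particular engine; texturing would additionally interpolate texture coordinates using the normalized edge values as barycentric weights, which is where the texture-mapping article below picks up.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Vec2 { float x, y; };

    // Signed area of the parallelogram spanned by (b - a) and (p - a);
    // its sign tells which side of edge a->b the point p lies on.
    static float edgeFn(const Vec2& a, const Vec2& b, const Vec2& p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    void drawTriangle(std::vector<uint32_t>& pixels, int width, int height,
                      Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color)
    {
        // Bounding box of the triangle, clipped to the screen.
        int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
        int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
        int maxX = std::min(width  - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
        int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

        if (edgeFn(v0, v1, v2) == 0.0f) return; // degenerate triangle

        for (int y = minY; y <= maxY; ++y)
            for (int x = minX; x <= maxX; ++x) {
                Vec2 p{ x + 0.5f, y + 0.5f }; // sample at the pixel center
                float w0 = edgeFn(v1, v2, p);
                float w1 = edgeFn(v2, v0, p);
                float w2 = edgeFn(v0, v1, p);
                // Inside if all three edge functions agree in sign
                // (covers both winding orders).
                if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                    (w0 <= 0 && w1 <= 0 && w2 <= 0))
                    pixels[y * width + x] = color;
            }
    }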
For a rough explanation of how a texture mapper works, just look at the Wikipedia article: https://en.wikipedia.org/wiki/Texture_mapping
I'm not aware of any graphics libraries that provide their own software layer, but I'm sure they exist somewhere.
As an example, DirectX has a layered setup: there is the code interface, which interacts with the HAL, or hardware abstraction layer. Depending on the capabilities of the underlying hardware, the HAL might run some pieces of code on the CPU because the drivers reported that the GPU doesn't support that feature. (Yes, I know this is a gross oversimplification.)
see: http://msdn.microsoft.com/en-us/library/gg426101(v=vs.85).aspx
and: http://www.codeproject.com/KB/graphics/DirectX_Lessons_2_.aspx
I am writing a very simple 3D particle software rendering system, but I am only really interested in coding the particle system and the rasterizer. So what I am looking for is the easiest way to go from 3D particle coordinates, through the camera, to screen coordinates, with features like variable FOV and a targeted (look-at) camera.
Any additional features such as distance from point to point, bounding volumes etc. would be a bonus, but ease of use is more important to me than features.
The only license requirement is that it's free (as in beer).
You probably want a scenegraph library. Many C++ scene graph libraries exist. OpenScenegraph is popular, Coin3D (free for non-commercial use) is an implementation of the OpenInventor spec, any of them would probably fit your need as it doesn't sound like you need any cutting-edge support. There is also Panda3D, which I've heard is good if you're into Python.
You could do all of this in a straight low-level toolkit like OpenGL but without prior experience it takes a lot longer to get a handle on using OpenGL than any of the scenegraph libraries.
When choosing a scenegraph library it will probably just come down to personal preference as to which API you prefer.
Viewing is done with elementary transformations, the same way that model transformations are done. If you want some convenience functions like gluLookAt() in GLU, then I don't know, but it would be really easy to make your own.
If you do want to make your own look-at function etc., then I can recommend Eigen, which is a really easy-to-use linear algebra library for C++; a sketch follows below.
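For example, a gluLookAt-style view matrix takes only a few lines with Eigen. This is a minimal sketch assuming the usual right-handed, OpenGL-style convention in which the camera looks down -Z; the function name is mine, not Eigen's:

    #include <Eigen/Dense>

    // Build a view matrix that places the camera at 'eye', looking at
    // 'target', with 'up' as the approximate up direction.
    Eigen::Matrix4f lookAt(const Eigen::Vector3f& eye,
                           const Eigen::Vector3f& target,
                           const Eigen::Vector3f& up)
    {
        const Eigen::Vector3f f = (target - eye).normalized(); // forward
        const Eigen::Vector3f s = f.cross(up).normalized();    // right
        const Eigen::Vector3f u = s.cross(f);                  // true up

        Eigen::Matrix4f m = Eigen::Matrix4f::Identity();
        m.block<1, 3>(0, 0) =  s.transpose();
        m.block<1, 3>(1, 0) =  u.transpose();
        m.block<1, 3>(2, 0) = -f.transpose();
        m(0, 3) = -s.dot(eye);
        m(1, 3) = -u.dot(eye);
        m(2, 3) =  f.dot(eye);
        return m;
    }

Variable FOV then just goes into a standard perspective projection matrix applied after this view transform.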
If you're trying to just focus on the rendering portion of a particle system, I'd go with an established 3D rendering library.
Given your description, you could consider trying to just add your particle rasterization to one or both of the software renderers in Irrlicht. One other advantage of this would be that you could compare your results to the DX/OGL particle effect renderers that already exist. All of the plumbing/camera management/etc would be done for you.
You may also want to have a look at the Armadillo C++ library.
We are developing some stress and strain analysis software at university. Now it's time to move from rectangles and boxes and spheres to some real models. But I still have little idea where to start.
In our software we are going to build a mesh and then run calculations, but how do I import solid bodies from CAD/CAE software?
1) How are CAD/CAE models organised? How are solid bodies represented? What are the capabilities of the DWG, DXF, IGES, and STEP formats? There is, e.g., a complete DXF reference, but it's too difficult for me to understand without knowing the basic concepts.
2) Are there C++ libraries for importing solid bodies from CAD/CAE file formats? Won't it be too difficult to build a complete data model just to be able to import a comprehensive file?
To import solid bodies you first need to export them from the CAD system. Most CAD system data files are proprietary (unless they've all moved over to XML in the few years I've been out of the industry!). DWG is Autodesk's file format and they don't (well, didn't) encourage people to read it directly. They did offer a file reading/writing library, if memory serves, but I don't know what the state of that is now. DXF, IGES and STEP are all data transfer formats.
DXF is owned by Autodesk but is published so other companies can use it to read and write models. The DXF reference is complicated, but is just a reference - you need to know the concepts before you can understand what it represents.
Solid models can be represented in a number of ways: by Constructive Solid Geometry (CSG), where the shape is made up from the addition or subtraction of solid primitives from each other; by Boundary Representation (B-Rep), where the edges are stored; by triangulated faces (as used by 3D Studio MAX, WPF and many others); and so on. The particular format will depend on what the modeller is designed to do.
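To give a feel for the CSG flavor, a toy tree might look like this in code; the type names are illustrative only, not from any real modeller:

    #include <memory>

    // A solid is either a primitive or a Boolean combination of two solids.
    enum class CsgOp { Union, Subtract, Intersect };

    struct Solid {
        virtual ~Solid() = default;
    };

    struct Sphere : Solid {
        double cx, cy, cz, radius; // implicit form: just a center and a radius
    };

    struct Box : Solid {
        double min[3], max[3];
    };

    struct CsgNode : Solid {
        CsgOp op;
        std::unique_ptr<Solid> left, right;
    };

Note that "box minus sphere" is stored as a tiny tree of operations rather than as explicit surface data; a B-Rep modeller would instead store the resulting faces, edges and vertices.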
There are libraries and tools for reading the various file formats. I don't know which ones are still active as it's 5+ years since I was heavily involved in 3D graphics. You'd be better off searching for the current crop yourself. I'd recommend starting with Wikipedia - it will have some articles on 3D graphics and there should be plenty of links to further reading and tools/libraries.
Once you have a reader you'll need to convert the data to your internal format - not a trivial task. You might be better off adopting an existing format. One of my jobs was the reading of models from various sources into my company's data structure. My task was greatly helped by the fact that the modellers we supported came with APIs that let us read the model meshes directly, and from there it was a relatively straightforward (but never easy) task to convert their mesh into ours. There were always edge cases and nuances of the format that caused headaches. These were multiplied several times over if we had to read the file format ourselves - such as for DXF or VRML.
The most common way solid models are represented in current 3D CAD software (CATIA, Pro/Engineer, SolidWorks, NX) is through Boundary Representation (B-Rep).
However, most of the libraries for importing such CAD data are proprietary. Some libraries come directly from geometric modelers (such as ACIS with Interop, Parasolid, or Granite), others are from small software companies specializing in the CAD data translation market.
On the open source side, maybe have a look at the Open CASCADE kernel. This kernel has been open-sourced (mostly), and it has some STEP import and meshing functionality.
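For reference, reading a STEP file and meshing it is short with Open CASCADE. A minimal sketch, with error handling mostly omitted and a hypothetical file name; the reader and mesher classes are the standard OCCT ones:

    #include <STEPControl_Reader.hxx>
    #include <TopoDS_Shape.hxx>
    #include <BRepMesh_IncrementalMesh.hxx>

    int main()
    {
        STEPControl_Reader reader;
        if (reader.ReadFile("part.step") != IFSelect_RetDone)
            return 1;                       // parse/translation failure

        reader.TransferRoots();             // translate STEP entities to shapes
        TopoDS_Shape shape = reader.OneShape();

        // Tessellate the exact B-Rep so a mesher/renderer can consume it.
        BRepMesh_IncrementalMesh mesh(shape, 0.1); // 0.1 = linear deflection
        return 0;
    }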
Your best bet is to work with an existing open source CAD system, such as BRL-CAD, that includes support for numerous importers and exporters.
Your intuition that a given format would be difficult to understand and implement support for is quite true, particularly when dealing with solid geometry formats intended for analysis purposes. Preserving solidity with topological guarantees is important for producing valid analyses, but rarely addressed by simple mesh formats.
The two prevalent international standards in particular (IGES and STEP) are excessively complex to support, as they can contain the same solid geometry encoded in numerous ways. Consider a simple sphere. That sphere could be encoded as a simple point and radius (with no explicit surface information, an implicit form common with CSG usage), it could be a polygonal mesh (a lossy B-Rep facet format), it could be a spline surface (B-Rep NURBS), it could be volumetric (think CT scan data), and more. Focusing on any one of those involves various trade-offs (simplicity, solidity, analytic guarantees, flexibility, etc.).
As mentioned regarding BRL-CAD, it's a large open source solid modeling system that has a lot of functionality in many areas you could leverage, about a dozen libraries of functionality and more than 400 succinct tools (two dozen or so being geometry converters). Even if it doesn't do exactly what you need, you have the source code and can contribute improvements back and collaborate with an existing community to help implement what you need.
Upon re-reading your question, let me completely change my answer. If all you need is meshes, then just use a simple mesh-based format.
OBJ is simple, good, and very standard. Conversion from many CAD formats to OBJ requires a tessellator/mesher, which you don't want to be writing anyway; just get a seat of a CAD package to do the translation. MoI or Rhino are low-cost and support many formats.
I regularly work with a piece of commercial software for electromagnetic simulations that uses the ACIS modeling kernel and components from Simmetrix. While I can't personally attest to the ease of using those libraries, they do seem to work as advertised and could save you a lot of work. They may not be available on suitable terms for academic use, but they do seem to be designed to do exactly what you want.
As far as I know, all CAD/CAE software supports file formats such as IGES and STEP for geometry, and formats such as I-DEAS and ANSYS for mesh data. Most of the time we find that IGES does not contain topological information. The development of STEP (Standard for the Exchange of Product model data) started in 1984 as a successor to IGES. The initial plan was that "STEP shall be based on one single, complete, implementation-independent Product Information Model, which shall be the Master Record of the integrated topical and application information models". There are libraries to read and write these file formats, and having written code that reads and writes geometry as well as meshes, I can say that reading or writing these file formats is not difficult, just a lot of boring work.