For example, in some games there are three different display modes:
OpenGL
DirectX
Software
What is this software mode? How do programmers make a game engine that generates images without using OpenGL or DirectX? Are there classes in C++ that generate frames?
Software means exactly that: software.
All rendering is, at its core, coloring pixels via some algorithm. That algorithm can be executed by dedicated hardware, but you can also implement those functions yourself in ordinary code. That doesn't mean it will be particularly fast; it takes a great deal of skill to implement a triangle rasterizer with decent speed.
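To make "coloring pixels via some algorithm" concrete, here is a tiny illustrative sketch (not from the original answer): a frame computed entirely in ordinary C++. Getting the finished buffer onto the screen is then a separate, single blit call to whatever platform API is available.

```cpp
// Software rendering at its most basic: an array of pixels, colored by code.
#include <cstdint>
#include <vector>

int main() {
    const int w = 256, h = 256;
    std::vector<uint32_t> pixels(w * h);           // the "frame"

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)                // the "algorithm"
            pixels[y * w + x] = 0xFF000000u        // opaque alpha
                              | uint32_t(x) << 16  // red ramps with x
                              | uint32_t(y) << 8;  // green ramps with y
    return 0;
}
```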
Software Mode can mean two things:
A system-provided emulation layer. For example, DX11 provides the WARP device, where you, as the application programmer, just specify "I want to use WARP" and the rest is done by DirectX. The emulation layer basically does option number 2:
Do it all by hand. Essentially, a hardware-accelerated GFX card mostly only draws triangles. You can write a function that draws the pixels of a textured triangle directly into the screen memory of the graphics card. It's not very fast nowadays (that's why hardware-accelerated GFX cards exist), but that's how it was done in the 80s and 90s, when no such cards existed yet. (A sketch of this follows below.)
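Here is a minimal sketch of option 2: a flat-shaded triangle rasterizer using the classic edge-function test, writing into a CPU-side pixel buffer. The structure is mine, not from any particular engine; a real renderer would add clipping, sub-pixel precision, perspective-correct texturing, and so on.

```cpp
// A bare-bones software triangle rasterizer: no OpenGL/DirectX anywhere.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;        // 0xAARRGGBB
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
};

// Signed area test: which side of edge (a -> b) is point p on?
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

void drawTriangle(Framebuffer& fb,
                  float x0, float y0, float x1, float y1,
                  float x2, float y2, uint32_t color) {
    // Clamp the bounding box of the triangle to the framebuffer.
    int minX = std::max(0, int(std::min({x0, x1, x2})));
    int maxX = std::min(fb.width - 1, int(std::max({x0, x1, x2})));
    int minY = std::max(0, int(std::min({y0, y1, y2})));
    int maxY = std::min(fb.height - 1, int(std::max({y0, y1, y2})));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample pixel centers
            float w0 = edge(x1, y1, x2, y2, px, py);
            float w1 = edge(x2, y2, x0, y0, px, py);
            float w2 = edge(x0, y0, x1, y1, px, py);
            // Inside if the point is on the same side of all three edges
            // (both windings accepted).
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (w0 <= 0 && w1 <= 0 && w2 <= 0))
                fb.pixels[y * fb.width + x] = color;
        }
}
```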
For a rough explanation of how a texture mapper works, just look at the Wikipedia article: https://en.wikipedia.org/wiki/Texture_mapping
I'm not aware of any graphics libraries that provide their own software layer, but I'm sure they exist somewhere.
As an example, DirectX has a layered setup: there is the code interface, which interacts with the HAL, or hardware abstraction layer. Depending on the capabilities of the underlying hardware, the HAL might run some pieces of code on the CPU because the drivers reported that the GPU doesn't support that feature. (Yes, I know this is a gross oversimplification.)
see: http://msdn.microsoft.com/en-us/library/gg426101(v=vs.85).aspx
and: http://www.codeproject.com/KB/graphics/DirectX_Lessons_2_.aspx
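To illustrate the fully software path mentioned in the first answer (WARP), here is a minimal sketch using the documented D3D11CreateDevice entry point. Windows-only, link against d3d11.lib; error handling omitted.

```cpp
// Requesting the Direct3D 11 WARP software rasterizer instead of hardware.
#include <d3d11.h>

int main() {
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    HRESULT hr = D3D11CreateDevice(
        nullptr,                 // default adapter
        D3D_DRIVER_TYPE_WARP,    // "I want to use WARP"
        nullptr, 0,              // no software module, no flags
        nullptr, 0,              // default feature levels
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {         // from here on, all rendering through this
        context->Release();      // device runs on the CPU, transparently
        device->Release();
    }
    return 0;
}
```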
Related
"The rendering engine generates animated 3D graphics by any of a number of methods (rasterization, ray-tracing etc.).
Instead of being programmed and compiled to be executed on the CPU or GPU directly, most often rendering engines are built upon one or multiple rendering application programming interfaces (APIs), such as Direct3D, OpenGL, or Vulkan which provide a software abstraction of the graphics processing unit (GPU). Low-level libraries such as DirectX, Simple DirectMedia Layer (SDL), and OpenGL are also commonly used in games as they provide hardware-independent access to other computer hardware such as input devices (mouse, keyboard, and joystick), network cards, and sound cards." - Game Engine
"UNISURF was a pioneering surface CAD/CAM system, designed to assist with car body design and tooling. It was developed by French engineer Pierre Bézier for Renault in 1968, and entered full use at the company in 1975.[1][2] By 1999, around 1,500 Renault employees made use of UNISURF for car design and manufacture." Advent of CAD/CAM Systems
"A geometric modeling kernel is a 3D solid modeling software component used in computer-aided design packages" Geometric Modeling Kernel
I am struggling to understand the underlying architecture of geometric modeling kernels compared to game engines and physics engines.
Questions:
Do I understand correctly that geometric modeling kernels are actually low-level APIs, more specifically kernel loadable extensions, used specifically to handle the rendering of geometric operations, like creating a boundary representation of objects on the screen?
How do geometric modeling kernels differ from the OpenGL-derived APIs? Are they also written in C++, or in older languages, since, I believe, they appeared earlier?
Do I understand correctly that geometric modeling kernels like ACIS and Parasolid continue to use their own proprietary low-level modules instead of OpenCL/OpenGL, or is it a mix?
What is the architecture of a physics engine in terms of APIs? Does it use OpenGL or other derived low-level graphics APIs? Take Havok: does it rely on some other low-level API, say Direct3D?
A geometric modeling kernel is a modeling kernel: it allows constructing or modifying geometry and has nothing to do with displaying that geometry on the screen. It also differs from model sculpting applications, because the latter are used by artists, while modeling kernels are used by engineers, and hence have very different inputs, even when constructing visually similar models.
Modern modeling kernels are usually accompanied by a 3D renderer for displaying models, but this functionality is usually put into dedicated components within the framework. Platforms have only a limited set of hardware-accelerated graphics libraries (OpenGL, Vulkan, Direct3D), so the 3D graphics engine coming with a modeling kernel usually relies on one of those lower-level libraries. Historically, OpenGL was used by the majority of industrial applications (in contrast to games), but this might not be the case today.
The language in which a modeling kernel is written may differ, but I believe most are written in C++. As modeling kernels started out in older days, they may inherit intermediate languages, like CDL in OCCT (remnants have been removed since OCCT 7.0.0), or code originating from other languages (like FORTRAN, popular in the past). The modeling kernels most likely no longer use these languages, but it might be apparent from the source code that the C++ code of some algorithms was converted from FORTRAN at some point (though of course you cannot check this with proprietary kernels).
If you take a look at the component structure of Open CASCADE Technology (OCCT), an open-source solid modeling kernel, you will find that its Visualization component implements interactive services for displaying models using OpenGL or another low-level graphics library, but an OCCT-based application does not have to use it, and may display shapes using other libraries.
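As a small illustration of the kernel side alone (a sketch assuming the OCCT headers and libraries are available), the following constructs exact geometry, performs a Boolean operation, and triangulates the result for a renderer to pick up, without touching any graphics API:

```cpp
// Minimal OCCT sketch: exact solid modeling plus triangulation, no OpenGL.
#include <BRepAlgoAPI_Fuse.hxx>          // Boolean operation
#include <BRepMesh_IncrementalMesh.hxx>  // triangulation for display
#include <BRepPrimAPI_MakeBox.hxx>       // exact primitive
#include <TopoDS_Shape.hxx>
#include <gp_Pnt.hxx>

int main() {
    // Two exact boxes, fused into one solid (B-Rep, not polygons).
    TopoDS_Shape a = BRepPrimAPI_MakeBox(10.0, 10.0, 10.0).Shape();
    TopoDS_Shape b =
        BRepPrimAPI_MakeBox(gp_Pnt(5.0, 5.0, 5.0), 10.0, 10.0, 10.0).Shape();
    TopoDS_Shape fused = BRepAlgoAPI_Fuse(a, b).Shape();

    // Generate a triangulation (linear deflection 0.1) that a separate
    // graphics engine could upload to the GPU.
    BRepMesh_IncrementalMesh mesh(fused, 0.1);
    return 0;
}
```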
In an attempt to generalize:
A graphics engine implements services for rendering existing geometry and is implemented on top of low-level APIs like OpenGL. This includes the shading/material model (Phong, PBR metallic-roughness), camera definition, and a bunch of other tools not coming with the low-level APIs.
A geometric modeling kernel implements data structures (like Boundary Representation or CSG) and the complex math for model construction by engineers (including primitives, fillets, and Boolean operations) on exact geometry represented by B-Splines and the like (in contrast to artist-oriented tools, which usually work on polygonal geometry). The framework may provide other tools, including a graphics engine, but these are usually separated from the geometry kernel. Graphics engines usually do not work directly with B-Spline geometry, so the geometric modeling kernel has to generate a triangulation for rendering the geometry (see the sketch after this list).
A physics engine implements only services related to physics simulation, including a collision detection module. The project may also contain samples using some graphics library, but the kernel should not depend on any.
A game engine combines a graphics engine, a physics engine, and an audio engine, and usually also provides a scripting language and other tools to simplify game development.
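To make the division concrete, here is a hypothetical C++ sketch of that layering. None of these types come from a real library; they only mirror the responsibilities listed above:

```cpp
// Hypothetical types illustrating the layering above; not a real library.
#include <vector>

struct Mesh {                          // all a graphics engine ever sees
    std::vector<float>    positions;   // x,y,z triples
    std::vector<unsigned> indices;     // triangles
};

class GeometricModelingKernel {        // exact B-Rep/B-Spline math lives here
public:
    // ... primitives, fillets, Boolean operations on exact geometry ...
    Mesh triangulate(double deflection) const { return Mesh{}; } // stub bridge
};

class GraphicsEngine {                 // built on OpenGL/Vulkan/Direct3D
public:
    void draw(const Mesh&) {}          // stub: consumes triangles only
};

class PhysicsEngine {                  // no dependency on graphics at all
public:
    void step(double dt) {}            // stub simulation tick
};
```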
AGG (Anti-Grain Geometry) is a High Quality Rendering Engine for C++.
OpenGL ES is a royalty-free, cross-platform API for full-function 2D and 3D graphics on embedded systems.
But AGG seems more efficient than OpenGL ES in map rendering; for instance, Mapnik uses AGG.
Q1: Why does Mapbox GL use OpenGL rather than AGG?
Q2: What's the difference between AGG and OpenGL ES?
Thanks! :)
OpenGL is an API for managing buffers on a GPU and specifying functions to map data between them; having originally been designed for rendering 3D geometry, it's still primarily oriented around that goal. It's an open standard with 25 years of history, implemented by all of the major vendors on all of the major operating systems; a subset of it is now even incorporated into standards-compliant web browsers.
Anti-Grain Geometry is a CPU-based 2d rasterisation library from a single vendor that appears to have started somewhere around 2001 and hasn't seen any web page updates since 2007. The most recent post to its mailing list is about its fractured state due to various independent downstream patches.
A developer might prefer AGG to OpenGL because the latter is very low level and not especially developer-friendly. It provides very little unless you put the effort in, and debugging tools are often poor. The former appears to be a high-level library which, since it operates on the CPU, will be amenable to your normal debugger.
However, AGG isn't accelerated, has no clear ownership or future, has no forum for governance, and isn't widely available.
Re Q1 and Q2:
OpenGL/-ES is usually GPU-accelerated (in fact, on most platforms with OpenGL-ES support, OpenGL-ES is available only if a GPU is present). AGG is a software rasterizer.
Thus, if a GPU is present, it's usually more efficient to use OpenGL/-ES if the intention is to generate output for an interactive (realtime) display.
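By way of contrast, here is a small CPU-only sketch in the style of AGG 2.x, rasterizing an anti-aliased triangle into a user-supplied memory buffer. This is written from memory of the AGG headers, so treat the exact class and header names as approximate:

```cpp
// CPU rasterization with AGG: everything happens in ordinary memory.
#include "agg_path_storage.h"
#include "agg_pixfmt_rgba.h"
#include "agg_rasterizer_scanline_aa.h"
#include "agg_renderer_base.h"
#include "agg_renderer_scanline.h"
#include "agg_rendering_buffer.h"
#include "agg_scanline_u.h"
#include <vector>

int main() {
    const int w = 320, h = 240;
    std::vector<unsigned char> buf(w * h * 4);           // plain CPU memory

    agg::rendering_buffer rbuf(buf.data(), w, h, w * 4); // wrap the buffer
    agg::pixfmt_rgba32 pixf(rbuf);
    agg::renderer_base<agg::pixfmt_rgba32> rb(pixf);
    rb.clear(agg::rgba8(255, 255, 255));

    agg::path_storage path;                              // a triangle
    path.move_to(20, 20);
    path.line_to(300, 120);
    path.line_to(60, 220);
    path.close_polygon();

    agg::rasterizer_scanline_aa<> ras;                   // anti-aliased fill
    agg::scanline_u8 sl;
    ras.add_path(path);
    agg::render_scanlines_aa_solid(ras, sl, rb, agg::rgba8(200, 40, 40));
    return 0;                                            // buf now holds pixels
}
```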
WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
Unfortunately, the accepted answer is incorrect. Stage3D uses:
DirectX on Windows systems
OpenGL on OSX systems
OpenGL ES on mobile
Software Renderer when no hardware acceleration is available (due to older hardware, or no hardware at all)
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on anything; it may share similar methodology/terminology. It is another rendering pipeline, which is why Adobe is so pumped about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully hardware-accelerated architecture that brings stunning visuals across desktop browsers and iOS and Android apps, enabling advanced 2D and 3D capabilities. This set of low-level GPU-accelerated APIs provides developers with the flexibility to leverage GPU hardware acceleration for significant performance gains in video game development, whether you're using cutting-edge 3D game engines or the intuitive, lightning-fast Starling 2D framework that powers Angry Birds.
How can I draw a pixel array very fast in C++?
I've seen many questions like this on Stack Overflow; they are all answered with:
use GDI (Windows)
use OpenGL
...
but there must be a way; after all, OpenGL does it somehow!
I'm writing a little raytracer and need to draw every pixel many times per second.
OpenGL is able to do it, platform-independent and fast, so how can I achieve that without OpenGL?
And "without OpenGL" does not mean:
use SDL (slow)
use this or that library
Please only suggest platform-native methods, or the library closest to that.
If it is possible (I know it is), how can I do it?
Platform-independent solutions are preferred.
Drawing graphics on Linux, you either have to use X11 or OpenGL (and in the near future, Wayland may be another option). On Linux there is no "native" way of doing graphics, because the Linux kernel doesn't care about graphics APIs. It provides interfaces (DRM) on top of which graphics systems are then implemented in user space. If you just want to splat pixels on the screen, without caring about windows, then you could also mmap /dev/fb0; but you normally don't want that, because nobody wants their screen clobbered by some program they can't move or hide.
Drawing single points is inefficient, no matter which API is being used, due to the protocol overhead.
So X11 it is. The best bet is to use the MIT-SHM extension, which you use to alter pixels in a buffer that is then blitted in whole by the X11 server. Of course, doing this with the pure Xlib functions is annoyingly cumbersome. This is what SDL effectively wraps up nicely for you.
The other option is OpenGL. OpenGL is not a library! It's a system-level API that gives you almost direct access to the GPU, and it integrates nicely with X11. Yes, the API is provided through a library that's loaded at runtime, but technically that library is just a "wrapper" or "interface" to the actual driver. Drawing single points with OpenGL makes no sense, but you can batch up several points into a list (using a vertex array) and then process that list. So the idea is to collect all the incoming points between two display refresh intervals and draw them in one single batch (a sketch follows).
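A rough sketch of that batching idea, assuming a current OpenGL context with entry points loaded (e.g. via GLEW) and a trivial point shader plus VAO already bound; the function names are mine:

```cpp
// Collect points between refreshes, then draw them as one batch.
#include <GL/glew.h>
#include <vector>

static std::vector<float> points;          // x,y pairs collected this frame

void queuePoint(float x, float y) {        // called for each incoming point
    points.push_back(x);
    points.push_back(y);
}

void flushPoints(GLuint vbo) {             // called once per display refresh
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(float),
                 points.data(), GL_STREAM_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_POINTS, 0, GLsizei(points.size() / 2));
    points.clear();                        // start the next batch
}
```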
Platform-independent solutions are preferred.
Why are you asking about native APIs then? By definition there can be no platform-independent native API. Either you're native, or you're platform-independent.
And in your particular scenario, I think SDL would be the best solution, because it offers just the right kind of abstraction and program-side interface for a raytracer. Just FYI: virtual machines like QEMU use SDL.
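For example, a minimal SDL2 sketch (standard SDL2 calls) of the abstraction meant here: the raytracer writes into a plain pixel array, and SDL streams that array to the screen once per frame.

```cpp
// Raytracer output loop with SDL2: CPU pixel buffer -> streaming texture.
#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("raytracer", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture*  tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_STREAMING, 640, 480);
    std::vector<uint32_t> pixels(640 * 480);   // the raytracer's frame

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // ... raytrace into pixels[] here ...

        SDL_UpdateTexture(tex, nullptr, pixels.data(), 640 * sizeof(uint32_t));
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren);                // one blit per frame
    }
    SDL_Quit();
    return 0;
}
```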
Or you use OpenGL which is a real plattform neutral API widely supported.
Drawing graphics on Linux, you either have to use X11 or OpenGL.
This is absolutely false! Counterexample: there are platforms that don't run X11, yet they display pixels (e.g. fonts).
Side note: OpenGL usually depends on X11 (it's possible, albeit hard, to run OpenGL without X11).
As @datenwork says, there are at least two other ways to draw pixels:
The framebuffer device (fbdev), an abstraction for interfacing with graphics hardware. Very old; designed by Martin Schaller, see the kernel docs. Source code is here. Also see here. Here's the simplest possible framebuffer driver. (See the sketch after this list.)
The Direct Rendering Manager (DRM), a kernel subsystem that provides an API for userland apps to send commands/data directly to the GPU. (Seems suspiciously similar to what OpenGL does, but I don't know!) Source code is here. Here's a DRM example that initializes a simple display pipeline.
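As an illustration of point 1, a minimal fbdev sketch that maps the framebuffer and paints a single pixel. It assumes a Linux console without a display server, permission to open /dev/fb0, and a 32-bit pixel format:

```cpp
// Splatting a pixel through the Linux framebuffer device.
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo v{};                 // resolution, bits per pixel, ...
    fb_fix_screeninfo f{};                 // stride (line_length), ...
    ioctl(fd, FBIOGET_VSCREENINFO, &v);
    ioctl(fd, FBIOGET_FSCREENINFO, &f);

    std::size_t len = std::size_t(f.line_length) * v.yres;
    auto* fb = static_cast<uint8_t*>(
        mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) return 1;

    // Paint one white pixel at (x=100, y=50), assuming 32 bpp.
    auto* px = reinterpret_cast<uint32_t*>(fb + 50 * f.line_length + 100 * 4);
    *px = 0xFFFFFFFF;

    munmap(fb, len);
    close(fd);
    return 0;
}
```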
Both of these are part of the kernel, so they're lower-level than X11, which is not part of the kernel. Both can draw arbitrary pixels (e.g. penguins). Note, however, that both are Linux-specific kernel interfaces, so unlike OpenGL they are not platform-independent.
See this for more on how to draw stuff on Linux.
I am going to make a lightweight, fast image viewer. I am curious as to which would be better for this project: SFML (using OpenGL) or SDL (using software rendering). My assumption is that hardware rendering with OpenGL should be faster. Is that right?
Well, SFML, as opposed to SDL. Though software rendering is often fast enough for this, SDL is very high level and therefore slower. Downvote me if you want, but this is the truth. May I ask why you can't just use the operating system's API? Image controls are very versatile.