Mapbox-GL: why not use AGG for map rendering? - c++

AGG (Anti-Grain Geometry) is a High Quality Rendering Engine for C++.
OpenGL ES is a royalty-free, cross-platform API for full-function 2D and 3D graphics on embedded systems.
But AGG seems more efficient than OpenGL ES for map rendering; for example, Mapnik uses AGG.
Q1: Why does Mapbox-GL use OpenGL instead of AGG?
Q2: What's the difference between AGG and OpenGL ES?
Thanks! :)

OpenGL is an API for managing buffers on a GPU and specifying functions to map data between them; having originally been designed for rendering 3D geometry, it is still primarily oriented around that goal. It's an open standard with 25 years of history that is implemented by all of the major vendors on all of the major operating systems, and a subset of it is now even incorporated into standards-compliant web browsers.
Anti-Grain Geometry is a CPU-based 2d rasterisation library from a single vendor that appears to have started somewhere around 2001 and hasn't seen any web page updates since 2007. The most recent post to its mailing list is about its fractured state due to various independent downstream patches.
A developer might prefer AGG to OpenGL because the latter is very low level and not especially developer friendly. It provides very little unless you put the effort in, and debugging tools are often poor. The former appears to be a high-level library which, since it operates on the CPU, will be amenable to your normal debugger.
However, AGG isn't hardware accelerated, has no clear ownership or future, has no forum for governance and isn't widely available.
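To give a sense of how low level that is, here is a hedged sketch (OpenGL ES 2.0; context creation, shader compilation and error checking omitted) of what just drawing one triangle involves:

    #include <GLES2/gl2.h>

    // Sketch only: assumes an EGL context and a compiled/linked shader program.
    void drawTriangle(GLuint program) {
        const GLfloat verts[] = { 0.0f, 0.5f,   -0.5f, -0.5f,   0.5f, -0.5f };

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);                         // allocate a buffer object on the GPU
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);  // upload vertex data

        glUseProgram(program);                         // user-supplied vertex + fragment shaders
        GLint pos = glGetAttribLocation(program, "a_position");
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glDrawArrays(GL_TRIANGLES, 0, 3);              // the GPU rasterizes the triangle
    }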

Re Q1 and Q2:
OpenGL/-ES is usually GPU accelerated (in fact on most platforms with OpenGL-ES support, OpenGL-ES is available only if a GPU is present). AGG is a software rasterizer.
Thus, if a GPU is present, it's usually more efficient/performant to use OpenGL/-ES if the intention is to generate output for an interactive (realtime) display.
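For contrast, here is a rough sketch of what CPU-side rendering with AGG looks like; class names follow the classic AGG 2.x headers, but treat this as illustrative rather than a drop-in snippet:

    #include "agg_rendering_buffer.h"
    #include "agg_pixfmt_rgba.h"
    #include "agg_renderer_base.h"
    #include "agg_renderer_scanline.h"
    #include "agg_rasterizer_scanline_aa.h"
    #include "agg_scanline_p.h"
    #include "agg_path_storage.h"
    #include <vector>

    int main() {
        const int w = 256, h = 256;
        std::vector<unsigned char> pixels(w * h * 4, 255);       // plain RAM, no GPU involved

        agg::rendering_buffer rbuf(pixels.data(), w, h, w * 4);
        agg::pixfmt_rgba32 pixf(rbuf);
        agg::renderer_base<agg::pixfmt_rgba32> ren(pixf);

        agg::path_storage path;                                   // e.g. a simplified map polygon
        path.move_to(20, 20);
        path.line_to(200, 40);
        path.line_to(120, 220);
        path.close_polygon();

        agg::rasterizer_scanline_aa<> ras;                        // anti-aliased rasterization, entirely on the CPU
        agg::scanline_p8 sl;
        ras.add_path(path);
        agg::render_scanlines_aa_solid(ras, sl, ren, agg::rgba8(30, 90, 200));

        // 'pixels' now holds the rendered image; you still have to blit/upload it to the screen yourself.
        return 0;
    }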

Related

Direct3D 11.1's target-independent rasterization (TIR) equivalent in OpenGL (including extensions)

Target-independent rasterization (TIR) is a new hardware feature in DirectX 11.1, which Microsoft used to improve Direct2D in Windows 8. AMD claimed that TIR improved performance in 2D vector graphics by some 500%. And there was some "war of words" with Nvidia because Kepler GPUs apparently don't support TIR (among other DirectX 11.1 features). The idea of TIR appears to have originated at Microsoft, because they have a patent application for it.
Now Direct2D is fine if your OS is Windows, but is there some OpenGL (possibly vendor/AMD) extension that provides access to the same hardware/driver TIR thing? I think AMD is in a bit of a weird spot because there is no vendor-independent 2D vector graphics extension for OpenGL; only Nvidia is promoting NV_path_rendering for now, and its architecture is rather different from Direct2D. So it's unclear where anything made by AMD to accelerate 2D vector graphics can plug in (or show up) in OpenGL, unlike in the Direct2D+Direct3D world. I hope my pessimism is going to be unraveled by a simple answer below.
I'm actually posting an update of sorts here because there's not enough room in comment-style posts for this. There seems to be a little confusion as to what TIR does, which is not simply "a framebuffer with no storage attached". This might be because I've only linked above to the mostly awful patentese (which is however the most detailed document I could find on TIR). The best high-level overview of TIR I found is the following snippet from Sinofsky's blog post:
to improve performance when rendering irregular geometry (e.g. geographical borders on a map), we use a new graphics hardware feature called Target Independent Rasterization, or TIR.
TIR enables Direct2D to spend fewer CPU cycles on tessellation, so it can give drawing instructions to the GPU more quickly and efficiently, without sacrificing visual quality. TIR is available in new GPU hardware designed for Windows 8 that supports DirectX 11.1.
Below is a chart showing the performance improvement for rendering anti-aliased geometry from a variety of SVG files on a DirectX 11.1 GPU supporting TIR: [chart snipped]
We worked closely with our graphics hardware partners [read AMD] to design TIR. Dramatic improvements were made possible because of that partnership. DirectX 11.1 hardware is already on the market today and we’re working with our partners to make sure more TIR-capable products will be broadly available.
It's this bit of hardware I'm asking to use from OpenGL. (Heck, I would settle even for invoking it from Mantle, because that also will be usable outside of Windows.)
The OpenGL equivalent of TIR is EXT_raster_multisample.
It's mentioned in the new features page for Nvidia's Maxwell architecture: https://developer.nvidia.com/content/maxwell-gm204-opengl-extensions.
I believe TIR is just a repurposing of a feature Nvidia and AMD use for antialiasing.
Nvidia calls it coverage sample antialiasing (CSAA) and their GL extension is GL_NV_framebuffer_multisample_coverage.
AMD calls it EQAA, but they don't seem to have a GL extension.
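For what it's worth, a hedged sketch of how the Nvidia CSAA path is set up via GL_NV_framebuffer_multisample_coverage (more coverage samples than color samples on one renderbuffer; availability and exact behaviour depend on the driver):

    // Sketch: a renderbuffer with 16 coverage samples over 4 color samples.
    // glRenderbufferStorageMultisampleCoverageNV is only available when the driver
    // exposes GL_NV_framebuffer_multisample_coverage.
    GLuint rb = 0;
    glGenRenderbuffers(1, &rb);
    glBindRenderbuffer(GL_RENDERBUFFER, rb);
    glRenderbufferStorageMultisampleCoverageNV(GL_RENDERBUFFER,
                                               16,           // coverage samples
                                               4,            // color samples
                                               GL_RGBA8, 1024, 768);
    // Attach 'rb' to an FBO, render as usual, then resolve with glBlitFramebuffer.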
Just to expand a bit on Nikita's answer, there's a more detailed Nvidia (2017) extension page that says:
(6) How do EXT_raster_multisample and NV_framebuffer_mixed_samples
interact? Why are there two extensions?
RESOLVED: The functionality in EXT_raster_multisample is equivalent to
"Target-Independent Rasterization" in Direct3D 11.1, and is expected to be
supportable today by other hardware vendors. It allows using multiple
raster samples with a single color sample, as long as depth and stencil
tests are disabled, with the number of raster samples controlled by a
piece of state.
NV_framebuffer_mixed_samples is an extension/enhancement of this feature
with a few key improvements:
- Multiple color samples are allowed, with the requirement that the number
of raster samples must be a multiple of the number of color samples.
- Depth and stencil buffers and tests are supported, with the requirement
that the number of raster/depth/stencil samples must all be equal for
any of the three that are in use.
- The addition of the coverage modulation feature, which allows the
multisample coverage information to accomplish blended antialiasing.
Using mixed samples does not require enabling RASTER_MULTISAMPLE_EXT; the
number of raster samples can be inferred from the depth/stencil
attachments. But if it is enabled, RASTER_SAMPLES_EXT must equal the
number of depth/stencil samples.
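Based on my reading of the spec quoted above, using EXT_raster_multisample would look roughly like this; the GL_RASTER_MULTISAMPLE_EXT token and glRasterSamplesEXT entry point come from that extension and are only present when the driver exposes it, so treat this as a sketch:

    // Hedged sketch: rasterize at N samples while writing to a single-sample color attachment.
    void enableTIRStyleRasterization() {
        glDisable(GL_DEPTH_TEST);               // depth/stencil tests must be off for this extension
        glDisable(GL_STENCIL_TEST);

        glEnable(GL_RASTER_MULTISAMPLE_EXT);    // rasterize at more samples than the color buffer has...
        glRasterSamplesEXT(8, GL_FALSE);        // ...here 8 raster samples, non-fixed sample locations

        // ...bind a single-sample color framebuffer and draw the 2D geometry here...
    }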

Does Stage3D use OpenGL? Or Direct3D when on Windows?

WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions? Or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?
The accepted answer is incorrect, unfortunately. Stage3D uses:
DirectX on Windows systems
OpenGL on OS X systems
OpenGL ES on mobile
a software renderer when no hardware acceleration is available (due to older hardware or no GPU at all)
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal
Good day. Stage3D isn't based on anything, though it may share similar methodology/terminology. It is its own rendering pipeline, which is why Adobe is so pumped about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to this heading "Comparing the advantages and restrictions of working with Stage3D" to get right down to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully
hardware-accelerated architecture that brings stunning visuals across
desktop browsers and iOS and Android apps enabling advanced 2D and 3D
capabilities. This set of low-level GPU-accelerated APIs provide
developers with the flexibility to leverage GPU hardware acceleration
for significant performance gains in video game development, whether
you’re using cutting-edge 3D game engines or the intuitive, lightning
fast Starling 2D framework that powers Angry Birds.

How do you render a game without using DirectX or OpenGL?

For example, in some games there are 3 different display modes:
OpenGL
DirectX
Software
What is this software mode? Like, how do programmers make a game engine that generates images without using OpenGL or DirectX? Are there classes in C++ that generate frames?
Software means exactly that: software.
All that rendering is, is coloring pixels via some algorithm. That algorithm can be done by dedicated hardware, but you could simply implement those functions yourself in actual code. Now, that doesn't mean it's particularly fast; it takes a great deal of skill to implement a triangle rasterizer that has decent speed.
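To illustrate "coloring pixels via some algorithm", here is a minimal flat-shaded triangle rasterizer that writes directly into a CPU-side framebuffer; real software renderers add clipping, sub-pixel precision, perspective-correct interpolation, SIMD and so on, but the core idea is just this:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Framebuffer {
        int width, height;
        std::vector<uint32_t> pixels;                 // 0xAARRGGBB
        Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    };

    // Signed area of triangle (a, b, p); the sign tells which side of edge ab the point p is on.
    static float edge(float ax, float ay, float bx, float by, float px, float py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    void fillTriangle(Framebuffer& fb, float x0, float y0, float x1, float y1,
                      float x2, float y2, uint32_t color) {
        // Bounding box of the triangle, clipped to the framebuffer.
        int minX = std::max(0, (int)std::floor(std::min({x0, x1, x2})));
        int maxX = std::min(fb.width  - 1, (int)std::ceil(std::max({x0, x1, x2})));
        int minY = std::max(0, (int)std::floor(std::min({y0, y1, y2})));
        int maxY = std::min(fb.height - 1, (int)std::ceil(std::max({y0, y1, y2})));

        float area = edge(x0, y0, x1, y1, x2, y2);
        if (area == 0.0f) return;                     // degenerate triangle

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                float px = x + 0.5f, py = y + 0.5f;   // sample at the pixel center
                float w0 = edge(x1, y1, x2, y2, px, py);
                float w1 = edge(x2, y2, x0, y0, px, py);
                float w2 = edge(x0, y0, x1, y1, px, py);
                // Inside if all edge functions agree with the triangle's winding.
                bool inside = (area > 0) ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                         : (w0 <= 0 && w1 <= 0 && w2 <= 0);
                if (inside) fb.pixels[y * fb.width + x] = color;
            }
        }
    }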
Software Mode can mean two things:
A system-provided emulation layer. For example, DX11 provides the WARP device, where you, as the application programmer, just specify "I want to use WARP" and the rest is done by DirectX (see the sketch after this list). The emulation layer basically does option number 2:
Do it all by hand. Essentially, a hardware-accelerated graphics card mostly only draws triangles. You can write a function that draws the pixels of a textured triangle directly into the screen memory of the graphics card. It's not very fast nowadays (that's why hardware-accelerated graphics cards exist), but that's how it was done in the 80s and 90s when no such cards existed yet.
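As a concrete example of option 1, a hedged sketch of requesting Direct3D 11's built-in software rasterizer (WARP) instead of a hardware device; error handling omitted:

    #include <d3d11.h>

    // D3D_DRIVER_TYPE_WARP asks for the software (WARP) rasterizer instead of
    // D3D_DRIVER_TYPE_HARDWARE; everything else is used exactly like a hardware device.
    void createWarpDevice(ID3D11Device** device, ID3D11DeviceContext** context) {
        D3D11CreateDevice(
            nullptr,                  // default adapter
            D3D_DRIVER_TYPE_WARP,     // software rasterizer
            nullptr, 0,               // no software module, no flags
            nullptr, 0,               // default feature levels
            D3D11_SDK_VERSION,
            device, nullptr, context);
        // From here on the rasterization simply runs on the CPU.
    }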
For a rough explanation of how a texture mapper works, just look at the Wikipedia article: https://en.wikipedia.org/wiki/Texture_mapping
I'm not aware of any graphics libraries that provide their own software layer, but I'm sure they exist somewhere.
As an example, DirectX has a layered setup: there is the code interface, which interacts with the HAL, or hardware abstraction layer. Depending on the capabilities of the underlying hardware, the HAL might run some pieces of code on the CPU because the drivers reported that the GPU doesn't support that feature. (Yes, I know this is a gross oversimplification.)
see: http://msdn.microsoft.com/en-us/library/gg426101(v=vs.85).aspx
and: http://www.codeproject.com/KB/graphics/DirectX_Lessons_2_.aspx

Is OpenGL ES suitable for performing skeletal animations?

I have to start a 3D project for mobile platforms. First of all, I would like to outline the main aim: skeletal animation. As for the solution, I was thinking of OpenGL ES and C++. So the questions are:
Is OpenGL ES robust enough to handle skeletal animation (including those skinning shaders)?
Is OpenGL ES supported widely across mobile platforms, and what are the most famous ones? (for instance, is iPad supported?)
Is this possible at all? I mean, will I have enough computation power?
Is it worth using XNA math library, because of its SIMD optimization (though I'm really unsure that SIMD is supported on mobile platforms, but who knows...).
Is it good to use C++ for this? If yes, then which compiler should I choose for development and testing? Moreover, I have no clue which compilers are used for mobile platforms.
As you might have gathered, I've never programmed for mobile platforms before. Therefore, some general recommendations are welcome.
Yes, OpenGL ES 2.0 can handle vertex skinning for skeletal animations quite well. OpenGL ES 1.1 used a fixed function pipeline, without shaders, so it's harder in the older API to do this, but 2.0 adds support for shaders. OpenGL ES 2.0 is present on all shipping iOS devices (the iPhone 3G S and newer supports it, including both iPads), as well as almost all Android devices (I could only find a couple of very low end handsets that didn't). Windows Phone 7 doesn't appear to support OpenGL ES, but I believe BlackBerry does.
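To make that concrete, here is a rough sketch of GPU vertex skinning on OpenGL ES 2.0; the attribute/uniform names (a_position, u_bones and so on) are made up for illustration, not taken from any particular engine:

    #include <GLES2/gl2.h>

    // GLSL ES 1.00 vertex shader: each vertex is blended between up to four bones.
    static const char* kSkinningVS =
        "attribute vec4 a_position;\n"
        "attribute vec4 a_boneWeights;\n"
        "attribute vec4 a_boneIndices;\n"
        "uniform mat4 u_bones[32];\n"        // palette of bone matrices, updated each frame
        "uniform mat4 u_mvp;\n"
        "void main() {\n"
        "    mat4 skin = u_bones[int(a_boneIndices.x)] * a_boneWeights.x\n"
        "              + u_bones[int(a_boneIndices.y)] * a_boneWeights.y\n"
        "              + u_bones[int(a_boneIndices.z)] * a_boneWeights.z\n"
        "              + u_bones[int(a_boneIndices.w)] * a_boneWeights.w;\n"
        "    gl_Position = u_mvp * (skin * a_position);\n"
        "}\n";

    // After updating the skeleton on the CPU each frame, upload the bone matrices:
    void uploadBones(GLuint program, const float* boneMatrices, int boneCount) {
        GLint loc = glGetUniformLocation(program, "u_bones");
        glUniformMatrix4fv(loc, boneCount, GL_FALSE, boneMatrices);   // column-major 4x4 matrices
    }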
If you're interested in this, I highly recommend reading Philip Rideout's book "iPhone 3D Programming". While it has "iPhone" in the title, he uses generic C++ for almost all of the code in the book, so it should translate to other platforms well and should be easy for you to understand. He even has a section in the "Optimizing" chapter with code for performing vertex skinning on OpenGL ES 2.0 and even 1.1. You can grab the sample code for the book here, including a demonstration of this skinning.
C++ is supported on iOS through Objective-C++, where you could set up the platform-specific UI elements in Objective-C and then do all your backend and rendering logic in C++. Again, Philip does this in his book, and you can see in his source code example applications how he structures this. The people at Imagination Technologies have also set up some platform-agnostic scaffolding in their PowerVR SDK, which some people have used for quickly getting their 3-D rendering up and running on mobile devices. Also in that SDK are some great documents about moving from OpenGL to OpenGL ES, as well as performing various effects on these GPUs.
I have heard of some people getting slightly better performance for small vertex sets by performing transformations on-CPU (on iOS this can be done using the Accelerate framework), but I'd imagine that vertex shaders would be much faster for larger geometry. The PowerVR GPUs that I've worked with in mobile devices are much more powerful than you'd think, particularly the new one that ships in the iPad 2.
You'll need to use the Xcode IDE, with either its GCC or LLVM compiler to target iOS devices, but I believe Android has a few more options in that regard.
In short:
Yes, of course. Why not?
Yes, I suppose. What else? DirectX definitely not.
Yes, I suppose. But depends on what else you want to do.
No, at least not just because of SIMD, as I suppose it is not much supported on mobile platforms, at least the SIMD instructions XNA is optimized for.
Yes, why not? I think the i...s mostly use Objective-C, but there should be compilers for C++, too. Just ask google, as I also don't have any mobile experience.

hardware requirement for OpenGL

My knowledge of OpenGL is very little. I was researching some RTOSes for my project. Some sort of lightweight UI is also required. I came across OpenGL support in some UI package. My doubt is whether a separate GPU is required for OpenGL or not.
No separate GPU is required; all you need are OpenGL drivers. There are even OpenGL software drivers (Mesa) that will render OpenGL onto anything.
Assuming it's a relatively recent RTOS, it may support OpenGL ES, which is a reduced subset designed for low-power/low-memory devices.
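If you want to check at runtime which implementation you actually got (for example Mesa's llvmpipe software renderer), a quick sketch, assuming a current GL context:

    #include <GLES2/gl2.h>   // or the platform's desktop GL header
    #include <cstdio>

    void printGLInfo() {
        // The renderer string typically identifies a software implementation (e.g. "llvmpipe").
        std::printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));
    }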