Does Stage3D use OpenGL, or Direct3D when on Windows?

WebGL is based on OpenGL ES 2.0.
Is it correct to say that Stage3D is also based on OpenGL? I mean, does it call OpenGL functions, or does it call Direct3D when it runs on Windows?
If not, could you explain what API Stage3D uses for hardware acceleration?

The accepted answer is unfortunately incorrect. Stage3D uses:
DirectX on Windows systems
OpenGL on OS X systems
OpenGL ES on mobile
A software renderer when no hardware acceleration is available (due to older hardware, or no GPU at all)
Please see: http://www.slideshare.net/danielfreeman779/adobe-air-stage3d-and-agal

Good day. Stage3D isn't based on anything; it may share similar methodology and terminology, but it is another rendering pipeline, which is why Adobe is so excited about it.
Have a look at this: http://www.adobe.com/devnet/flashplayer/articles/how-stage3d-works.html
You can skip down to the heading "Comparing the advantages and restrictions of working with Stage3D" to get right to it.
Also, take a peek at this: http://www.adobe.com/devnet/flashplayer/stage3d.html, excerpt:
The Stage3D APIs in Flash Player and Adobe AIR offer a fully
hardware-accelerated architecture that brings stunning visuals across
desktop browsers and iOS and Android apps enabling advanced 2D and 3D
capabilities. This set of low-level GPU-accelerated APIs provide
developers with the flexibility to leverage GPU hardware acceleration
for significant performance gains in video game development, whether
you’re using cutting-edge 3D game engines or the intuitive, lightning
fast Starling 2D framework that powers Angry Birds.

Related

Mapbox-GL: why not use AGG for map rendering?

AGG (Anti-Grain Geometry) is a high-quality rendering engine for C++.
OpenGL ES is a royalty-free, cross-platform API for full-function 2D and 3D graphics on embedded systems.
But AGG seems more efficient than OpenGL ES for map rendering; for example, Mapnik uses AGG.
Q1: Why does Mapbox-GL use OpenGL rather than AGG?
Q2: What's the difference between AGG and OpenGL ES?
Thanks! :)
OpenGL is an API for managing buffers on a GPU and specifying functions to map data between them; having originally been designed for rendering 3D geometry, it's still primarily oriented around that goal. It's an open standard with 25 years of history, implemented by all of the major vendors on all of the major operating systems, and a subset of it is now even incorporated into standards-compliant web browsers.
Anti-Grain Geometry is a CPU-based 2d rasterisation library from a single vendor that appears to have started somewhere around 2001 and hasn't seen any web page updates since 2007. The most recent post to its mailing list is about its fractured state due to various independent downstream patches.
A developer might prefer AGG to OpenGL because the latter is very low level and not especially developer friendly: it provides very little unless you put the effort in, and debugging tools are often poor. The former appears to be a high-level library which, since it operates on the CPU, will be amenable to your normal debugger.
However, AGG isn't accelerated, has no clear ownership or future, has no forum for governance, and isn't widely available.
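To make the "managing buffers on a GPU" description above concrete, here is a minimal C++ sketch (assuming an OpenGL ES 2.0 context is already current; the header path varies by platform) of uploading a small vertex array into a buffer object. The function name is illustrative.

```cpp
// Sketch: uploading vertex data into a GPU-side buffer object (OpenGL ES 2.0).
#include <GLES2/gl2.h>   // <OpenGLES/ES2/gl.h> on iOS

GLuint uploadTriangle()
{
    const GLfloat vertices[] = {
        0.0f,  0.5f,   // top
       -0.5f, -0.5f,   // bottom left
        0.5f, -0.5f,   // bottom right
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);               // ask the driver for a buffer name
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the current vertex buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    return vbo;                          // the geometry now lives GPU-side
}
```

Everything after the upload (attribute setup, shaders, draw calls) operates on that GPU-side copy, which is the part a CPU rasterizer like AGG has no equivalent of.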
Re Q1 and Q2:
OpenGL/-ES is usually GPU accelerated (in fact on most platforms with OpenGL-ES support, OpenGL-ES is available only if a GPU is present). AGG is a software rasterizer.
Thus, if a GPU is present, it's usually more efficient/performant to use OpenGL/-ES when the intention is to generate output for an interactive (realtime) display.

What's the point of Nvidia 3D Vision and AMD HD3D?

Why do they promote these weird technologies instead of just supporting OpenGL quad buffering?
Well, they say AMD cards beginning with the HD6000 series support OpenGL quad buffering, yet HD3D is still what you see on the front pages (well, maybe because there is no native DirectX quad buffering support yet)...
Two reasons: first, keeping an incentive for professional users who need quad-buffer stereo to buy the professional cards; now that 3D Vision is being pushed so hard, a lot of people have asked "uncomfortable" questions. The other reason was to attempt vendor lock-in with a custom API, so that 3D Vision games would work only on NVidia hardware.
Similar reasoning applies on the side of AMD. However, the FireGL cards didn't keep up with the Radeons, so there's little reason for AMD to make their Radeon cards less attractive to professionals (current AMD FireGL cards cannot compete with NVidia Quadros; the Radeons are also the competition for the Quadros), so adding quad-buffer OpenGL support to them was the logical decision.
Note that this is a pure marketing decision. There have never been technical reasons of any kind for this artificial limitation of consumer cards.
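As a rough illustration of what "OpenGL quad buffering" looks like in code, here is a hedged sketch using the classic WGL path on Windows. Whether the driver actually grants PFD_STEREO is exactly the consumer/professional distinction discussed above; the function names are illustrative.

```cpp
// Sketch: requesting a quad-buffered stereo pixel format, then drawing each eye.
#include <windows.h>
#include <GL/gl.h>

bool setupStereoPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_STEREO;   // quad buffer = double buffer x 2 eyes
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0 || !SetPixelFormat(hdc, format, &pfd))
        return false;

    // Verify that stereo was actually granted and not silently dropped.
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
    return (pfd.dwFlags & PFD_STEREO) != 0;
}

void drawStereoFrame()
{
    glDrawBuffer(GL_BACK_LEFT);     // left-eye back buffer
    // ... render the scene with the left-eye camera ...

    glDrawBuffer(GL_BACK_RIGHT);    // right-eye back buffer
    // ... render the scene with the right-eye camera ...

    // SwapBuffers(hdc) then presents both eyes together.
}
```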
Windows 8.1 supports Stereoscopic modes right out of the box, in DirectX 11.1.
AMD HD3D and NVidia 3DVision add:
1) Enumeration of stereo 3D modes on Windows prior to 8.1 (on Windows 8.1 the DirectX API provides this)
2) Sending the EDID signal to the monitor to enable/disable 3D on Windows prior to 8.1 (on Windows 8.1, the DirectX API provides this)
3) Rendering the left and right cameras in an above/below arrangement -- the API tells you the offset to use for the right image, so you use standard double buffering instead of quad buffering. (On Windows 8.1, this is not necessary -- sensing a pattern?)
3DVision adds the following:
1) Support for desktop apps to run in Stereo without engaging full screen mode (and it sometimes actually works).
2) Support for forcing non-stereoscopic games to render stereoscopically by intercepting the drawing calls. (This works most of the time -- on AMD, you can get the same thing by buying TriDef or iZ3D.)
3) An NVidia-standard connector (proprietary, but common to all NVidia cards) for the IR transmitter and shutter glasses. (AMD uses the HDMI 3D spec and leaves the glasses up to the monitor company; NVidia can do this as well.)
Note:
The key feature in both cases is being able to enumerate modes that have stereo support, and being able to send the EDID code to the monitor to turn on the stereo display.
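For the newer built-in path described above, here is a minimal sketch (assuming DXGI 1.2, i.e. Windows 8 or later, and the MSVC toolchain) of asking DXGI whether a stereo-capable display is available. The helper name is illustrative.

```cpp
// Sketch: query DXGI 1.2 for stereo 3D availability (Windows 8+).
#include <dxgi1_2.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

bool stereoDisplayAvailable()
{
    Microsoft::WRL::ComPtr<IDXGIFactory2> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;   // DXGI 1.2 not present (pre-Windows 8)

    // TRUE when the OS exposes a stereo-capable output for windowed swap chains.
    return factory->IsWindowedStereoEnabled() == TRUE;
}
```

A stereo swap chain is then created with DXGI_SWAP_CHAIN_DESC1::Stereo set to TRUE, and the back buffer becomes a two-slice texture array (left and right eye).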

Direct2D equivalent for iOS / OS X development

I am developing a user interface for my application. Most of my application is portable, as it is written in C++, but today I started thinking about the UI, which is currently written in Direct2D. I was wondering if there is an equivalent for developing a UI on iOS (iPad) and OS X (Mac)?
Something high level enough that I could draw rectangles and circles, but also low level enough that it is not as slow as GDI.
Thanks in advance.
PS: I DON'T want a comparison of which is better or worse; I just want to know what options I have.
CoreAnimation is a GPU-accelerated framework. Individual views are cached on the GPU, and you can then apply arbitrary composition transforms to them; such transforms are applied by the GPU to the cached image. So you can use CoreGraphics to draw a circle, rectangle or whatever, have CoreAnimation store that bitmap on the GPU, and then transform it.
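To illustrate the Core Graphics part, here is a minimal sketch of drawing a rectangle and a circle into a CGContext. Obtaining the context (for example inside a view's drawRect:) is platform-specific and omitted; the function name is illustrative.

```cpp
// Sketch: drawing simple shapes with Core Graphics (plain C API, callable from C++/Objective-C++).
#include <CoreGraphics/CoreGraphics.h>

void drawShapes(CGContextRef ctx)
{
    // Filled rectangle
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
    CGContextFillRect(ctx, CGRectMake(20, 20, 120, 80));

    // Filled circle (an ellipse inscribed in a square)
    CGContextSetRGBFillColor(ctx, 0.9, 0.3, 0.2, 1.0);
    CGContextFillEllipseInRect(ctx, CGRectMake(160, 20, 80, 80));
}
```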
Also from the first-party frameworks, Sprite Kit provides a game-oriented framework that includes game-style (ie, accelerated write-once read-many 'sprites') drawing alongside physics/etc.
OpenGL ES is also fully supported. You can assume 2.0 is always available as it was introduced on the 3GS and Apple no longer accepts binaries for older devices. 3.0 is also available on the latest iPhone. That's obviously quite a bit lower level than Direct2D but Apple supplies GLKit which allows you to upload images trivially and to emulate the old fixed-functionality pipeline with just a few simple calls.
Out in the third-party world I guess the main thing people are going to suggest is Cocos2d but at this point it's already playing catch-up to Sprite Kit.
Of those, CoreAnimation, OpenGL and Cocos2D span iOS and OS X with some minor differences; Sprite Kit is already available on iOS and will turn up in the next OS X release, Mavericks.
Start with Cocoa (OS X) and Cocoa Touch (iOS). In apps made with those you can use Core Graphics, which seems like a good fit for your needs, or OpenGL, which is probably overkill. Of course there are many third-party libraries you can use; Cocos2d, as Petesh mentioned, is one of them.

Is OpenGL ES suitable for performing skeletal animations?

I have to start a 3D-Project for mobile platforms. First of all I would like to outline the main aim - skeletal animation. As for the solution I was thinking of OpenGL ES and C++. So the questions are:
Is OpenGL ES robust enough to handle skeletal animation (including the skinning shaders)?
Is OpenGL ES supported widely across mobile platforms, and what are the most famous ones? (for instance, is iPad supported?)
Is this feasible at all? I mean, will I have enough computational power?
Is it worth using the XNA math library because of its SIMD optimization (though I'm really unsure whether SIMD is supported on mobile platforms, but who knows...)?
Is it good to use C++ for this? If yes, which compiler should I choose for development and testing? Moreover, I have no clue which compilers are used for mobile platforms.
As you may have gathered, I've never programmed for mobile platforms before. Therefore, some general recommendations are welcome.
Yes, OpenGL ES 2.0 can handle vertex skinning for skeletal animations quite well. OpenGL ES 1.1 used a fixed function pipeline, without shaders, so it's harder in the older API to do this, but 2.0 adds support for shaders. OpenGL ES 2.0 is present on all shipping iOS devices (the iPhone 3G S and newer supports it, including both iPads), as well as almost all Android devices (I could only find a couple of very low end handsets that didn't). Windows Phone 7 doesn't appear to support OpenGL ES, but I believe BlackBerry does.
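To make the "skinning shaders" part concrete, here is a hedged sketch of a GLSL ES 2.0 linear-blend (matrix-palette) skinning vertex shader, stored as a C++ string for upload via glShaderSource. The attribute/uniform names and the palette size of 24 bones are assumptions, not from any particular engine; the palette must fit within the device's GL_MAX_VERTEX_UNIFORM_VECTORS.

```cpp
// Sketch: a minimal GLSL ES 2.0 vertex shader for matrix-palette skinning,
// kept as a C++11 raw string for use with glShaderSource.
static const char* kSkinningVS = R"(
attribute vec3 a_position;
attribute vec4 a_boneWeights;   // up to 4 influences per vertex
attribute vec4 a_boneIndices;   // bone palette indices (passed as floats in ES 2.0)

uniform mat4 u_bones[24];       // bone palette; size is an assumption, limited by vertex uniform count
uniform mat4 u_viewProjection;

void main()
{
    // Blend the bone matrices by the per-vertex weights, then transform.
    mat4 skin =
        a_boneWeights.x * u_bones[int(a_boneIndices.x)] +
        a_boneWeights.y * u_bones[int(a_boneIndices.y)] +
        a_boneWeights.z * u_bones[int(a_boneIndices.z)] +
        a_boneWeights.w * u_bones[int(a_boneIndices.w)];
    gl_Position = u_viewProjection * (skin * vec4(a_position, 1.0));
}
)";
```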
If you're interested in this, I highly recommend reading Philip Rideout's book "iPhone 3D Programming". While it has "iPhone" in the title, he uses generic C++ for almost all of the code in the book, so it should translate to other platforms well and should be easy for you to understand. He even has a section in the "Optimizing" chapter with code for performing vertex skinning on OpenGL ES 2.0 and even 1.1. You can grab the sample code for the book here, including a demonstration of this skinning.
C++ is supported on iOS through Objective-C++, where you could set up the platform-specific UI elements in Objective-C and then do all your backend and rendering logic in C++. Again, Philip does this in his book, and you can see in his source code example applications how he structures this. The people at Imagination Technologies have also set up some platform-agnostic scaffolding in their PowerVR SDK, which some people have used for quickly getting their 3-D rendering up and running on mobile devices. Also in that SDK are some great documents about moving from OpenGL to OpenGL ES, as well as performing various effects on these GPUs.
I have heard of some people getting slightly better performance for small vertex sets by performing transformations on-CPU (on iOS this can be done using the Accelerate framework), but I'd imagine that vertex shaders would be much faster for larger geometry. The PowerVR GPUs that I've worked with in mobile devices are much more powerful than you'd think, particularly the new one that ships in the iPad 2.
You'll need to use the Xcode IDE, with either its GCC or LLVM compiler to target iOS devices, but I believe Android has a few more options in that regard.
In short:
Yes, of course. Why not?
Yes, I suppose. What else? DirectX definitely not.
Yes, I suppose. But depends on what else you want to do.
No, at least not just because of SIMD; I suppose it is not widely supported on mobile platforms, at least not the SIMD instructions XNA is optimized for.
Yes, why not? I think the i...s mostly use Objective-C, but there should be compilers for C++, too. Just ask Google, as I also don't have any mobile experience.

How to tell whether an OpenGL context is hardware accelerated?

I know that if the OpenGL implementation does not find a suitable driver, it happily falls back and renders everything in software mode. That's fine for graphics applications, but it is not acceptable for computer games.
I know many users are on Windows XP, and if the user does not install the video card driver for their GPU, then OpenGL won't be hardware accelerated (while DirectX either is, or throws errors if it isn't).
Is there a better (and possibly cross-platform) way to determine whether OpenGL is using hardware acceleration than measuring the FPS and notifying the user if it's too low?
I know that games like Quake 3 can detect this somehow...
It seems that there is no direct way to query OpenGL for this, but there are some methods that may help you determine whether hardware acceleration is present. See here for Windows ideas. In a UNIX environment, glxinfo | grep "direct rendering" should work.
See also glGetString and 5.040 How do I know my program is using hardware acceleration on a Wintel card?
This previous answer suggests that checking to see if the user only has OpenGL 1.1 may be sufficient.
How to write an installer that checks for openGL support?
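As a concrete starting point for the glGetString approach mentioned above, here is a hedged C++ sketch that flags the well-known software renderer strings (Microsoft's "GDI Generic" GL 1.1 fallback, Mesa's llvmpipe/softpipe). The string list is heuristic and not exhaustive, and the function name is illustrative; it must be called with a current GL context.

```cpp
// Sketch: heuristic check for a software-rendered OpenGL context.
#include <GL/gl.h>
#include <cstring>

bool looksSoftwareRendered()
{
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    if (!vendor || !renderer)
        return true;  // no usable context at all

    return std::strstr(renderer, "GDI Generic") != nullptr   // Windows fallback (GL 1.1)
        || std::strstr(vendor,   "Microsoft")   != nullptr
        || std::strstr(renderer, "llvmpipe")    != nullptr    // Mesa software rasterizer
        || std::strstr(renderer, "softpipe")    != nullptr;
}
```

Checking the reported GL_VERSION for a bare 1.1, as the linked answer suggests, catches the same Windows fallback case.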