Reference device -> develop DirectX 11 on old hardware? - opengl

I just read about the "reference device" device type in Direct3D.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb219625(v=vs.85).aspx
Does this mean I can develop and test (not performance, just the visual result) a Direct3D 11 application, including fancy Shader Model 5 stuff, on any old hardware?
Is there an equivalent for OpenGL?

Yes, effectively that is what the reference driver does. Specifically, it exists so that hardware rendering can be compared against it: if the hardware output does not match the reference, that can indicate a driver bug (or an "optimisation").
To my knowledge there is no reference driver for OpenGL, unfortunately.

Does this mean I can develop and test (not performance, just the visual result) a Direct3D 11 application, including fancy Shader Model 5 stuff, on any old hardware?
Yes. However, you should expect absolutely horrible performance. You could get about one frame per minute for complex pixel shaders (on the DirectX 9 reference device), and it could take even longer than that; needless to say, the same shader could run in real time with hardware acceleration. The reference device wasn't made for performance, and if I remember correctly, the DirectX SDK states (somewhere) that the main purpose of the reference device is to let developers check that their scene looks the way it should and that there are no unexpected driver bugs.
Another problem is that if you're running Windows XP, there will be no DirectX 10 or 11, reference device or not.
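For what it's worth, selecting the reference rasterizer is just a matter of passing a different driver type at device creation. A minimal sketch, assuming the DirectX SDK (which installs the reference rasterizer DLL) is present:

```cpp
// Sketch: create a Direct3D 11 device on the software reference rasterizer.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool CreateReferenceDevice(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_11_0; // Shader Model 5
    D3D_FEATURE_LEVEL obtained = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_REFERENCE,  // software reference rasterizer, not the GPU
        nullptr,                    // no software module
        0,                          // no creation flags
        &requested, 1,
        D3D11_SDK_VERSION,
        device, &obtained, context);

    return SUCCEEDED(hr) && obtained == D3D_FEATURE_LEVEL_11_0;
}
```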
Is there an equivalent for OpenGL?
No. The closest thing is Mesa3D, but it isn't OpenGL-certified. A year or two ago it could display a very different picture when GLSL shaders were involved, and it could crash on shaders with flow control; I haven't used it since. However, when used without GLSL shaders, Mesa3D was quite fast, comparable to a hardware OpenGL implementation, and significantly faster than the DirectX 9 reference device.

Mesa3D has a software rasterizer, and they recently added OpenGL 3.0 support.

Related

What is the difference between a DirectX (or OpenGL) API call and a normal system call on Windows?

I have a question about how the Windows OS switches from user mode (application code) to kernel mode (e.g. driver code) when an application uses a DirectX (or OpenGL) function versus a normal function (a system call such as printf).
What is the difference?
If there is a performance difference, I would like to know about it.
Eventually anything that outputs to a device will involve a user-mode/kernel-mode transition. When you call printf, the transition happens when the C library calls WriteConsole, although likely only after some buffering to minimize the number of transitions. Essentially the same approach applies to DirectX and OpenGL: do as much as you can in user mode, and do what you must in kernel mode in a driver.
The specifics vary widely based on which version of the OS, which driver model (XPDM or WDDM) you are talking about, and which version of Direct3D or OpenGL you are asking about in particular.
From an application developer perspective, what you need to know is: try to avoid calling Draw a zillion times with just a few triangles in each if you want a good frame-rate.
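To make that concrete, here is a rough Direct3D 11 sketch of the two styles; the Mesh struct and the assumption that buffers, shaders and pipeline state are already bound are purely illustrative:

```cpp
#include <d3d11.h>

// Hypothetical per-object data; buffers and state are assumed to be bound already.
struct Mesh { UINT indexCount; };

// Anti-pattern: thousands of tiny draw calls, each paying user-mode overhead
// and feeding the command buffer that is eventually submitted to kernel mode.
void DrawOneByOne(ID3D11DeviceContext* ctx, const Mesh* meshes, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        ctx->DrawIndexed(meshes[i].indexCount, 0, 0);
}

// Better: submit many copies of the same geometry with a single call.
void DrawBatched(ID3D11DeviceContext* ctx, UINT indexCountPerInstance, UINT instanceCount)
{
    ctx->DrawIndexedInstanced(indexCountPerInstance, instanceCount, 0, 0, 0);
}
```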
For Direct3D 10 and 11, the best generic performance advice is found in Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games (Gamefest 2007)
For legacy Direct3D 9, see Accurately Profiling Direct3D API Calls (Direct3D 9)

Image Geometrical remapping on OpenGL ES

I have an algorithm that runs on a PC and uses OpenCV remap. It is slow, and I need to run it on an embedded system (for example, a device such as this: http://www.hardkernel.com/main/products/prdt_info.php).
It has OpenGL 3.0, and I am wondering whether it is possible to write an OpenGL shader to do the remapping (OpenCV's remap).
I have another device that has OpenGL 2.0. Can that device do shader programming?
Where can I learn about shader programming in OpenGL?
I am using Linux.
Edit 1
The code takes around 1 minute on a PC; on an embedded system it takes around 2 hours!
I need to run it on an embedded system, so I am thinking of using OpenGL or OpenCL (the board has an OpenCL 1.1 driver).
What is the best option for this? Can I use OpenGL 2 or OpenGL 3?
A PC with a good graphics card (usable by OpenCV) is much faster than a small embedded board like an ODROID or a Banana Pi. I mean that computational power per unit of price or energy is lower on these platforms.
If your algorithm is slow:
Are you sure your graphics driver is correctly configured to support OpenCV?
Try to improve your algorithm. On a current PC it is easy to get 1 TFLOP with OpenCL, so if your program really requires more, you should think about cloud computing and the like. Also check that you configured the appropriate buffer types, etc.
OpenGL 3 allows general-purpose shaders, but OpenGL 2 is very different, and it would be much harder or even impossible to make your algorithm compatible with it.
To learn OpenGL/GLSL, be careful, because many pages teach bad or outdated code.
I recommend a good book, such as: http://www.amazon.com/OpenGL-Shading-Language-Cookbook-Edition/dp/1782167021/ref=dp_ob_title_bk
EDIT 1
OpenGL 3+ and OpenGL ES 3+ have general-purpose shaders and can be used for fast computing, so yes, you will get a performance increase. But the GPUs on these platforms are very small and slow (usually fewer than 10 cores). Do not expect to get the same one-minute result on an ODROID as on your PC with 500-2000 GPU cores.
OpenGL 2 is built around the fixed-function pipeline, and it is hard to use for parallel computing.
If you really need to use an embedded platform, maybe you could use a cluster of them?
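To give an idea of what the remap itself looks like on the GPU: OpenCV's remap computes dst(x, y) = src(map_x(x, y), map_y(x, y)), which maps almost directly onto a fragment shader that does one dependent texture fetch. Below is a minimal, untested sketch (all names are mine); it assumes the map is uploaded as a two-channel float texture with normalized coordinates, which on ES 2.0 needs OES_texture_float:

```cpp
// GLSL ES 1.00 shaders for a full-screen remap pass, embedded as C++ strings.
// u_map holds normalized (x, y) lookup coordinates for each destination pixel.
static const char* kRemapVertexShader =
    "attribute vec2 a_pos;\n"              // full-screen quad in clip space
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    v_texcoord  = a_pos * 0.5 + 0.5;\n"
    "    gl_Position = vec4(a_pos, 0.0, 1.0);\n"
    "}\n";

static const char* kRemapFragmentShader =
    "precision highp float;\n"
    "uniform sampler2D u_src;\n"           // source image
    "uniform sampler2D u_map;\n"           // RG texture: where to read from
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    vec2 coord   = texture2D(u_map, v_texcoord).rg;\n"
    "    gl_FragColor = texture2D(u_src, coord);\n"
    "}\n";
```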

Direct3D 11.1's target-independent rasterization (TIR) equivalent in OpenGL (including extensions)

Target-independent rasterization (TIR) is a new hardware feature in DirectX 11.1, which Microsoft used to improve Direct2D in Windows 8. AMD claimed that TIR improved performance in 2D vector graphics by some 500%. And there was some "war of words" with Nvidia, because Kepler GPUs apparently don't support TIR (among other DirectX 11.1 features). The idea of TIR appears to have originated at Microsoft, because they have a patent application for it.
Now Direct2D is fine if your OS is Windows, but is there some OpenGL (possibly vendor/AMD) extension that provides access to the same hardware/driver TIR feature? I think AMD is in a bit of a weird spot, because there is no vendor-independent 2D vector graphics extension for OpenGL; only Nvidia is promoting NV_path_rendering for now, and its architecture is rather different from Direct2D. So it's unclear where anything made by AMD to accelerate 2D vector graphics could plug into (or show up in) OpenGL, unlike in the Direct2D+Direct3D world. I hope my pessimism will be dispelled by a simple answer below.
I'm actually posting an update of sorts here because there's not enough room in comment-style posts for this. There seems to be a little confusion as to what TIR does, which is not simply "a framebuffer with no storage attached". This might be because I've only linked above to the mostly awful patentese (which is however the most detailed document I could find on TIR). The best high-level overview of TIR I found is the following snippet from Sinofsky's blog post:
to improve performance when rendering irregular geometry (e.g. geographical borders on a map), we use a new graphics hardware feature called Target Independent Rasterization, or TIR.
TIR enables Direct2D to spend fewer CPU cycles on tessellation, so it can give drawing instructions to the GPU more quickly and efficiently, without sacrificing visual quality. TIR is available in new GPU hardware designed for Windows 8 that supports DirectX 11.1.
Below is a chart showing the performance improvement for rendering anti-aliased geometry from a variety of SVG files on a DirectX 11.1 GPU supporting TIR: [chart snipped]
We worked closely with our graphics hardware partners [read AMD] to design TIR. Dramatic improvements were made possible because of that partnership. DirectX 11.1 hardware is already on the market today and we’re working with our partners to make sure more TIR-capable products will be broadly available.
It's this bit of hardware I'm asking to use from OpenGL. (Heck, I would settle even for invoking it from Mantle, because that also will be usable outside of Windows.)
The OpenGL equivalent of TIR is EXT_raster_multisample.
It's mentioned in the new features page for Nvidia's Maxwell architecture: https://developer.nvidia.com/content/maxwell-gm204-opengl-extensions.
I believe TIR is just a repurposing of a feature Nvidia and AMD use for antialiasing.
Nvidia calls it coverage sample antialiasing, and their GL extension is GL_NV_framebuffer_multisample_coverage.
AMD calls it EQAA, but they don't seem to have a GL extension for it.
Just to expand a bit on Nikita's answer, there's a more detailed Nvidia (2017) extension page that says:
(6) How do EXT_raster_multisample and NV_framebuffer_mixed_samples
interact? Why are there two extensions?
RESOLVED: The functionality in EXT_raster_multisample is equivalent to
"Target-Independent Rasterization" in Direct3D 11.1, and is expected to be
supportable today by other hardware vendors. It allows using multiple
raster samples with a single color sample, as long as depth and stencil
tests are disabled, with the number of raster samples controlled by a
piece of state.
NV_framebuffer_mixed_samples is an extension/enhancement of this feature
with a few key improvements:
- Multiple color samples are allowed, with the requirement that the number
of raster samples must be a multiple of the number of color samples.
- Depth and stencil buffers and tests are supported, with the requirement
that the number of raster/depth/stencil samples must all be equal for
any of the three that are in use.
- The addition of the coverage modulation feature, which allows the
multisample coverage information to accomplish blended antialiasing.
Using mixed samples does not require enabling RASTER_MULTISAMPLE_EXT; the
number of raster samples can be inferred from the depth/stencil
attachments. But if it is enabled, RASTER_SAMPLES_EXT must equal the
number of depth/stencil samples.
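For completeness, here is roughly what turning this on looks like in code. The token names come from the spec text quoted above; the entry point glRasterSamplesEXT and the exact call sequence are my assumption, and the sketch presumes a loader (glad here) that exposes the extension:

```cpp
#include <glad/glad.h>

// Sketch: rasterize at 8 samples per pixel while writing to a single-sample
// color target, i.e. the TIR-style mode described in the spec excerpt above.
void EnableRasterMultisample()
{
    if (!GLAD_GL_EXT_raster_multisample)
        return; // extension not exposed by this driver

    // Depth and stencil tests must be disabled in this mode.
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_STENCIL_TEST);

    glEnable(GL_RASTER_MULTISAMPLE_EXT);
    glRasterSamplesEXT(8, GL_TRUE); // 8 raster samples, fixed sample locations
}
```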

Is there a trick to use an OpenGL 3.x program on a graphics card which only supports OpenGL 2.x?

I have an onboard graphics card which supports OpenGL 2.2. Can I run an OpenGL (let's say version 3.3) application on it by using some software, etc.?
OpenGL major versions somewhat refer to available hardware capabilities:
OpenGL-1: fixed function pipeline (DirectX 7 class HW)
OpenGL-2: programmable vertex and fragment shader support.(DirectX 9 class HW)
OpenGL-3: programmable geometry shader support (DirectX 10 class HW)
OpenGL-4: programmable tessellation shader support and a few other nice things (DirectX 11 class HW).
If your GPU supports only OpenGL-2, then there is no way you can run an OpenGL-3 program on it and make use of all its bells and whistles. Your best bet is a software rasterizing implementation.
A few years ago, when shaders were something new, Nvidia shipped their developer drivers with a software rasterizer that emulated the higher functionality, to kickstart shader development, so that there were actual applications to run on those new programmable GPUs.
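As a practical first step, it is worth querying what the installed driver actually reports before assuming anything about OpenGL 3.x support. A minimal sketch (requires a current GL context; on Windows, include <windows.h> before <GL/gl.h>):

```cpp
#include <cstdio>
#include <GL/gl.h>

// Print what the driver exposes. The version string starts with the highest
// supported OpenGL version, e.g. "2.1 Mesa ..." or "3.3.0 NVIDIA ...".
void PrintGLInfo()
{
    std::printf("GL_VERSION : %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
    std::printf("GL_VENDOR  : %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
}
```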
Sure you can, you just have to disable those features. Whether this will work well depends greatly on the app.
The simplest method is to intercept all OpenGL calls, using some manner of DLL hooking, and filter them as necessary. When OGL3 features are used, return a "correct" answer (but don't do anything) or provide null for calls that aren't required.
If done properly, and the app isn't relying on the OGL3 features, this will let it run without them on your hardware.
If the app does require OGL3 stuff, results will be unreliable at best, and it may be unusable. It really depends on what exactly the app does and what it needs. Providing a null implementation of OGL3 will allow you to run it, but results are up in the air.
No. Well, not really. NVIDIA has some software emulation that might work, but other than that, no.
Your hardware simply can't do what GL 3.0+ asks of it.
also:
I have a onboard graphics card which supports opengl 2.2
There is no OpenGL 2.2. Perhaps you meant 2.1.

OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?

I am working on a gaming framework of sorts, and am a newcomer to OpenGL. Most books seem to not give a terribly clear answer to this question, and I want to develop on my desktop using OpenGL, but execute the code in an OpenGL ES 2.0 environment. My question is twofold then:
If I target my framework for OpenGL on the desktop, will it just run without modification in an OpenGL ES 2.0 environment?
If not, then is there a good emulator out there, PC or Mac; is there a script that I can run that will convert my OpenGL code into OpenGL ES code, or flag things that won't work?
It's been about three years since I was last doing any ES work, so I may be out of date or simply remembering some stuff incorrectly.
No, targeting desktop OpenGL is not the same as targeting OpenGL ES, because ES is a subset. ES does not implement the immediate-mode functions (glBegin()/glEnd(), glVertex*(), ...); vertex arrays are the main way of sending geometry into the pipeline.
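For example, the same triangle drawn in immediate mode and with client-side vertex arrays (the ES-friendly form) looks roughly like this; a sketch in GL 1.x / ES 1.x style, assuming a current context:

```cpp
#include <GL/gl.h>

void DrawTriangleImmediate()    // desktop-only; removed in OpenGL ES
{
    glBegin(GL_TRIANGLES);
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glVertex2f( 0.0f,  1.0f);
    glEnd();
}

void DrawTriangleVertexArray()  // ES 1.x-compatible client-side arrays
{
    static const GLfloat verts[] = { -1.0f, -1.0f,  1.0f, -1.0f,  0.0f, 1.0f };
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```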
Additionally, it depends on what profile you are targeting: at least in the Lite profile, ES does not need to implement floating-point functions. Instead you get fixed-point functions; think 32-bit integers where the first 16 bits hold the integer part and the remaining 16 bits the fraction.
In other words, even simple code might be unportable if it uses floats (you'd have to replace calls to gl*f() functions with calls to gl*x() functions).
See how you might solve this problem in Trolltech's example (specifically the qtwidget.cpp file; it's a Qt example, but still...). You'll see they make this call:
q_glClearColor(f2vt(0.1f), f2vt(0.1f), f2vt(0.2f), f2vt(1.0f));
This is meant to replace the call to glClearColorf(). Additionally, they use the macro f2vt() ("float to vertex type"), which automagically converts the argument from float to the correct data type.
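Something like the following is presumably what f2vt() boils down to on a fixed-point (16.16) build; the helper names here are mine:

```cpp
#include <cstdint>

// 16.16 fixed point: upper 16 bits hold the integer part, lower 16 the fraction.
inline int32_t FloatToFixed(float f)   { return static_cast<int32_t>(f * 65536.0f); }
inline float   FixedToFloat(int32_t x) { return static_cast<float>(x) / 65536.0f; }

// e.g. FloatToFixed(1.0f) == 0x00010000, FloatToFixed(0.5f) == 0x00008000
```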
While developing some small demos for a company three years ago, I had success working with PowerVR's SDK. It's for Visual C++ under Windows; I haven't tried it under Linux (there was no need, since I was working on a company PC).
A small update to reflect my recent experiences with ES. (June 7th 2011)
Today's platforms probably don't use the Lite profile, so you probably don't have to worry about fixed-point math.
When porting your desktop code to mobile (e.g. iOS), you'll quite probably have to do primarily the following, and not much else:
replace glBegin()/glEnd() with vertex arrays
replace some calls to functions such as glClearColor() with calls such as glClearColorf()
rewrite your windowing and input system
if targeting OpenGL ES 2.0 to get shader functionality, you'll have to completely replace the fixed-function pipeline's built-in behavior with shaders - at least basic ones that reimplement the fixed-function behavior (a minimal sketch follows this list)
Really important: unless your mobile system somehow isn't memory-constrained, you really want to look into texture compression for your graphics chip; for example, on iOS devices you'll be uploading PVRTC-compressed data to the chip
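As a rough idea of what "reimplementing the fixed-function pipeline" means in practice, here is a minimal ES 2.0 shader pair (embedded as C++ strings; all names are purely illustrative) covering the basic transform-and-texture path:

```cpp
// One MVP transform plus one texture lookup: the smallest useful replacement
// for the fixed-function pipeline in OpenGL ES 2.0.
static const char* kVertexShader =
    "attribute vec4 a_position;\n"
    "attribute vec2 a_texcoord;\n"
    "uniform mat4 u_mvp;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    v_texcoord  = a_texcoord;\n"
    "    gl_Position = u_mvp * a_position;\n"
    "}\n";

static const char* kFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texcoord);\n"
    "}\n";
```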
In OpenGL ES 2.0, which is what new gadgets use, you also have to provide your own vertex and fragment shaders because the old fixed-function pipeline is gone. This means doing any shading calculations etc. yourself, which can be quite complex, but you can find existing implementations in GLSL tutorials.
Still, as GLES is a subset of desktop OpenGL, it is possible to run the same program on both platforms.
I know of two projects to provide GL translation between desktop and ES:
glshim: Substantial fixed pipeline to 1.x support, basic ES 2.x support.
Regal: Anything to ES 2.x.
From my understanding, OpenGL ES is a subset of OpenGL. I think that if you refrain from using immediate-mode stuff like glBegin() and glEnd(), you should be all right. I haven't done much with OpenGL in the past couple of months, but when I was working with ES 1.0, as long as I didn't use glBegin/glEnd, all the code I had learned from standard OpenGL worked.
I know the iPhone simulator runs OpenGL ES code. I'm not sure about the Android one.
Here is a Windows emulator.
Option 3) You could use a library like Qt to handle your OpenGL code using their built-in wrapper functions. This gives you the option of using one code base (or minimally different code bases) for OpenGL and building for almost any platform you want. You wouldn't need to port it for each different platform you wanted to support. Qt can even choose the OpenGL context based on the functions that you use.