Implement OpenCL on OpenGL

I have seen that OpenCL is widely supported by CPU implementations as well as some GPU implementations.
For cases where there is a GPU but no OpenCL GPU implementation available, would it be feasible to implement OpenCL using OpenGL?
Maybe most operations would map quite well to GLSL fragment shaders or even compute shaders.
If this is feasible, where should one begin? Is there any 'minimal' CPU OpenCL implementation that one could start from?

For cases where there is a GPU but no OpenCL GPU implementation available, would it be feasible to implement OpenCL using OpenGL?
Possible: Yes, certainly. Every Turing complete machine can emulate any other Turing complete machine.
Feasible: OpenGL implementations' GLSL compilers are already prima donnas, each implementation's compiler behaving a little differently. OpenGL itself has tons of heuristics in it to select code paths. Shoehorning a makeshift OpenCL on top of OpenGL + GLSL would be an exercise in a lot of pain.
Required: Absolutely not. Every GPU that has the capabilities required to actually support OpenCL already has drivers that support OpenCL, so this is a moot non-issue.

OpenCL has certain very explicit requirements on the precision of floating-point operations. GLSL's requirements are far more lax. So even if you implemented the OpenCL API on top of OpenGL, you would never be able to get the same behavior.
Oh and in case you start thinking otherwise, Vulkan is no better in this regard.

Related

Strict nVidia OpenGL support (OpenGL 3.2)

While AMD follows the OpenGL specification very strictly, nVidia often works even when the specification is not followed. One example is that nVidia supports element indices (used in glDrawElements) stored in CPU memory, whereas AMD only supports element indices sourced from an element array buffer.
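To make the difference concrete, here is roughly what I mean (a sketch; it assumes a VAO and shader program are already bound):

    // Client-side index array: nVidia often accepts this even where the
    // specification (e.g. a core profile) does not allow it; AMD rejects it.
    GLushort indices[] = { 0, 1, 2 };
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);

    // Portable version: source the indices from an element array buffer.
    GLuint ibo;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (const void*)0);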
My question is: Is there a way to enforce strict OpenGL behaviour using a nVidia driver? Currently I'm interested in a solution for a Windows/OpenGL 3.2/FreeGlut/GLEW setup.
Edit: If it is not possible to enforce strict behaviour on the driver itself, is there some OpenGL proxy that guarantees strict behaviour (such as GLIntercept)?
No vendor enforces the specification strictly. Be it AMD, nVidia, Intel, PowerVR, ... they all have their idiosyncrasies and you have to learn to live with them, sadly. That is one of the annoying things about having each vendor implement their own GLSL compiler, as opposed to Microsoft implementing the one and only HLSL compiler in D3D.
The ANGLE project tries to mitigate this to a certain extent by providing a single shader validator shared across many of the major web browsers, but it is an uphill battle and this only applies to WebGL for the most part. You will always have implementation differences when every vendor implements the entire API themselves.
Now that the Khronos Group has seriously taken on the task of establishing a set of conformance tests for desktop OpenGL, like it has for WebGL / OpenGL ES, things might start to get a little bit better. But forcing a driver to operate in a strict conformance mode is not really a standard thing; there may be #pragmas and such that hint the compiler to behave more strictly, but these are all vendor specific.
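For illustration, NVidia's GLSL compiler reportedly accepts an "optionNV" pragma family that can request stricter behaviour; treat the exact spelling below as an assumption to verify against NVidia's release notes, and note that other drivers will simply ignore unknown pragmas:

    // Hedged sketch: NVidia-specific strictness hint embedded in a GLSL
    // source string; not part of the GLSL specification, not portable.
    const char* fragmentSource =
        "#version 150\n"
        "#pragma optionNV(strict on)\n"   // assumed NVidia-only pragma
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(1.0); }\n";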
By the way, I realize this question has nothing to do with GLSL per se, but it was the best example I could give.
Unfortunately, the only way you can be sure that your OpenGL code will work on your target hardware is to test it. In theory simply writing standard compliant code should work everywhere, but sadly this isn't always the case.

Fixing GLSL shaders for Nvidia and AMD

I am having problems getting my GLSL shaders to work on both AMD and Nvidia hardware.
I am not looking for help fixing a particular shader, but how to generally avoid getting these problems. Is it possible to check if a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it?
I know, in the end, testing is the only way to be sure, but during development I would like to at least avoid the obvious problems.
Everyone using GLSL must have these problems, so why can't I find a good way to fix them?
Is it possible to check if a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it?
No. If you are going to be serious about developing applications, testing on a variety of hardware is the only reliable way to go about it. And if you're not going to be serious, then who cares.
Generally speaking, the easiest way to handle this for a small team is to avoid the problem altogether. Most driver incompatibilities come from attempting to do something unorthodox: passing arrays as output/input varying variables, passing matrices as attributes, using bleeding-edge driver features, etc. So... don't do that. Use only solid, safe constructs that have been around in GLSL for a while and have almost certainly been exercised by real-world OpenGL applications.
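For example (a sketch with hypothetical names, not a rule from the spec): rather than passing a matrix as a vertex attribute, which consumes four attribute locations and has historically been a point of driver disagreement, the conservative route is a plain uniform:

    // 'program' is an already-linked shader program declaring 'uniform mat4 u_mvp'.
    GLfloat mvpMatrix[16] = { /* column-major model-view-projection */ };
    GLint mvpLoc = glGetUniformLocation(program, "u_mvp");
    glUseProgram(program);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpMatrix);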
Using NVidia's NVEmulate and AMD's GPU ShaderAnalyzer could be an option.
AMD's GPU ShaderAnalyzer is a stand-alone GLSL/HLSL compiler. NVidia's NVEmulate is a tool to emulate the features of different (better) NVidia graphics cards in software. So if you have an NVidia card, you can simply run your program to test it (possibly emulating another NVidia card with NVEmulate) and use the ShaderAnalyzer to see whether your shaders compile for AMD cards.
If your shader runs on AMD, it will most probably run on NVidia as well. You can still test this with cgc (NVidia's stand-alone Cg compiler, part of the Cg Toolkit); it compiles GLSL and Cg code to binary or cross-compiles it to HLSL. This is the compiler the NVidia driver uses for GLSL anyway.
The bonus is that you can also see the binary/assembly code of your shader, which is very helpful for low-level optimizations.
What no tool can tell you is whether the shader works as expected on different hardware. I recently found out that some new AMD drivers don't handle default uniform values properly, without any warning or error message... but that is a different story.
So at some point you have to test your code on the target hardware.
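When you do get access to each vendor's hardware, it pays to always read the compile and link logs, since drivers differ in what they silently tolerate. A minimal helper along these lines (a sketch; error handling kept short):

    #include <GL/glew.h>
    #include <cstdio>

    // Compile a shader and print the driver's info log; the log often
    // contains vendor-specific warnings even when compilation succeeds.
    GLuint compileShader(GLenum type, const char* source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

        char log[4096];
        GLsizei len = 0;
        glGetShaderInfoLog(shader, sizeof(log), &len, log);
        if (len > 0)
            fprintf(stderr, "shader log: %s\n", log);

        return ok ? shader : 0;
    }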

What Are The Changes To OpenGL From 1.x, 2.x, 3.x And 4.x?

What has changed that makes OpenGL different? I have heard of people not liking OpenGL since OpenGL 3.x, but what happened? I want to learn OpenGL, but I don't know which version. I want great graphics with the newer versions, but what is so bad about them?
Generally, every major version of OpenGL corresponds roughly to a hardware generation. This means that, generally, if you have a card that can run OpenGL 3.0, you can also run OpenGL 3.3 (given a sufficiently new driver).
OpenGL 2.x is the DX9-capable generation of hardware, OpenGL 3.x the DX10 generation, and OpenGL 4.x the DX11 generation. The overlap is not 100% exact, but that is the general picture.
OpenGL 1.x revolves around immediate mode, which is conceptually very easy to use, and a strictly fixed function pipeline. The entry barrier is very low, because there is hardly anything you have to learn, and hardly anything you can do wrong.
The downside is that you have considerably more library calls, and CPU-GPU parallelism is not optimal in this model. This does not matter so much on old hardware, but becomes more and more important to get the best performance out of newer hardware.
Beginning with OpenGL 1.5, and gradually more and more in 2.x, there is a slight paradigm shift away from immediate mode towards retained mode, i.e. using buffer objects, and a somewhat programmable pipeline. Vertex and fragment shaders are available, with varying feature sets and degrees of programmability.
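To illustrate the shift (a simplified sketch): immediate mode re-specifies the geometry call by call every frame, while retained mode uploads it once:

    // OpenGL 1.x immediate mode: one function call per vertex, every frame.
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();

    // OpenGL 1.5+ retained mode: upload once into a buffer object,
    // then draw from it with a single call per frame.
    GLfloat verts[] = { -1,-1,0,  1,-1,0,  0,1,0 };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);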
Much of the functionality in these versions was implemented via (often vendor-specific) extensions, sometimes only half-way or in several distinct steps, and quite a few features had non-obvious restrictions or pitfalls for the casual programmer (e.g. register combiners, lack of branching, limits on instruction counts and dependent texture fetches, vertex texture fetch "support" that allowed zero fetches).
With OpenGL 3.0, fixed function was deprecated but still supported as a backwards-compatibility feature. Almost all of "modern OpenGL" is implemented as core functionality as of OpenGL 3.x, with clear requirements and guarantees, and with an (almost) fully programmable pipeline. The programming model is based entirely on using retained mode and shaders. Geometry shaders are available in addition to vertex and fragment shaders.
Version 3 has received a lot of negative critique, but in my opinion this is not entirely fair. The birth process was admittedly a PR fiasco, but what came out is not all bad. Compared with previous versions, OpenGL 3.x is bliss.
OpenGL 4.x has an additional tessellation shader stage, which requires hardware features not present in OpenGL 3.x compatible hardware (although I daresay that is rather a marketing reason than a technical one). There is also support for a new texture compression format that older hardware cannot handle.
Lastly, OpenGL 4.x introduces some API improvements that are irrespective of the underlying hardware. Those are also available under OpenGL 3.x as 100% identical core extensions.
All in all, my recommendation for everyone beginning to learn OpenGL is to start with version 3.3 right away (or 3.2 if you use Apple).
OpenGL 3.x compatible hardware is nearly omnipresent nowadays. There is no sane reason to assume anything older, and you save yourself a lot of pain. From an economic point of view, it does not make sense to support anything older: entry-level GL4 cards currently cost around $30, so someone who cannot afford a GL3 card will not be able to pay for your software either, and it is twice as much work to maintain two code paths.
Also, you will eventually have no other choice but to use modern OpenGL, so if you start with 1.x/2.x you will have to unlearn and learn anew later.
On the other hand, diving right into version 4.x is possible, but I advise against it for the time being. Whatever is not hardware-dependent in the API is also available in 3.x, and tessellation (or compute shaders) is usually not strictly necessary right away and can always be added later.
For an exact list of changes I suggest you download the specification documents of the latest of each OpenGL major version. At the end of each of these there are several appendices documenting the changes between versions in detail.
The many laptops with Intel integrated graphics designed before approximately a year ago do not do OpenGL 3. That includes some expensive business machines, e.g. the $1600 Thinkpad X201, still for sale on Amazon as of today (4/3/13), although Lenovo has stopped making them.
OpenGL 3.1 removed the "fixed function pipeline". That means that writing vertex and fragment shaders is no longer optional: If you want to display anything, you must write them. This makes it harder for the beginner to write "hello world" in OpenGL.
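For a sense of scale, even the smallest core-profile program needs a shader pair along these lines (a minimal sketch):

    // Minimal GLSL 1.50 (OpenGL 3.2) shaders as C string literals; in 1.x
    // none of this was required to get a colored triangle on screen.
    const char* vertexSrc =
        "#version 150\n"
        "in vec4 position;\n"
        "void main() { gl_Position = position; }\n";
    const char* fragmentSrc =
        "#version 150\n"
        "out vec4 color;\n"
        "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";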
The OpenGL Superbible Rev 5 does a good job of teaching you to use modern OpenGL without falling back on the fixed function pipeline. That's where I would start if I were learning OpenGL from scratch.
Their rev 4 still covers the fixed function pipeline if you want to start with a more "historical" approach.

Is the OpenGL code running on the GPU?

Suppose there is a program which consists of normal C++ code and OpenGL code.
Both the C++ and the OpenGL calls are compiled and linked into the same ELF executable.
And, seemingly, they both run on the CPU.
Why does OpenGL code have more power to paint on the screen than plain C++ code?
Why does OpenGL code have more power to paint on the screen than C++ code?
Because OpenGL merely sends drawing commands to the GPU, which then does the bulk of the work. Note that there are also OpenGL implementations that are not GPU accelerated and are therefore no faster than other software rasterizers running on the CPU.
Unless you're talking about GLSL, there is no distinction between "C++ code" and "OpenGL code". It's all just C or C++, depending on what you're building. OpenGL is an API, a library that contains functions that do stuff.
Your code calls OpenGL functions, which are functionally no different from any other C++ function you might call. Functions in C++ do something, based on how they're implemented.
OpenGL functions tell the GPU what to do, using GPU-specific constructs. That's what OpenGL is for: to abstract away the specifics of hardware, so that you can write code that is not hardware-dependent. Your code that calls OpenGL functions should work on any OpenGL implementation that supports your minimum GL version (and extensions, if you're using those).
Similarly, std::fstream abstracts away differences between, say, Windows and Linux file access commands. Same API for the user, but it has different implementations on different OS's.
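To connect this back to the GPU question: every line below is an ordinary C function call, yet the actual pixel-filling work happens on the GPU, asynchronously (sketch; assumes a current GL context):

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // records state in the driver
    glClear(GL_COLOR_BUFFER_BIT);          // queues a clear command
    glDrawArrays(GL_TRIANGLES, 0, 3);      // queues a draw command
    glFinish();                            // blocks until the GPU has finished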

GPU Usage in a non-GLSL OpenGL Application

I read on the OpenGL Wiki that current modern GPUs are only programmable using shaders.
Modern GPUs no longer support fixed function. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new modern programs use shaders. New users need not learn fixed function related operations of GL such as glLight, glMaterial, glTexEnv and many others.
Does that mean that if we do not use shaders/GLSL in OpenGL, we do not actually access the GPU at all and only do the computation on the CPU?
No. It means that all fixed function stuff is automatically converted to shaders by the drivers.
Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function.
These shaders still run on the GPU (as all shaders do). They are just automatically made for you.
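In other words, a classic fixed-function sequence like the following still ends up running shader code on a modern GPU; the driver generates that shader behind your back (a sketch for illustration, compatibility profile assumed):

    // Fixed-function lighting setup. Internally, a modern driver translates
    // this state into an equivalent, automatically generated shader.
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    GLfloat lightPos[] = { 0.0f, 1.0f, 1.0f, 0.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);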