Dynamic shader in OpenGL

CUDA 5 and OpenCL 2 introduce dynamic parallelism (kernels launched by another kernel through a device-side API rather than by the host API).
Is there an equivalent to this in OpenGL? Is it possible to simulate it with feedback loops? (I think not.) Isn't this something OpenGL is missing (except maybe in the GL 4.3 compute shaders), for things like shadows, textures, etc.?

According to this page, it seems that compute shaders in OpenGL don't support dynamic parallelism. You can only launch them from the host with glDispatchCompute() or glDispatchComputeIndirect().
It is even less likely that the other shader stages could gain such support, because they are embedded in OpenGL's fixed processing stages.
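The indirect variant is the closest workaround: a first GPU pass can write the group counts that a second dispatch will consume, so the amount of work is decided on the GPU even though the host still issues the call. A minimal sketch, assuming a GL 4.3 context and two already-built compute programs (progProducer / progConsumer are placeholder names):

    /* Sketch: the closest OpenGL gets to "kernel launches kernel" is letting a
     * previous GPU pass write the dispatch size into a buffer, then issuing
     * glDispatchComputeIndirect from the host. */
    #include <GL/glew.h>

    typedef struct { GLuint num_groups_x, num_groups_y, num_groups_z; } DispatchIndirectCommand;

    void run_two_pass_dispatch(GLuint progProducer, GLuint progConsumer)
    {
        GLuint indirectBuf;
        glGenBuffers(1, &indirectBuf);
        glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, indirectBuf);
        glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(DispatchIndirectCommand),
                     NULL, GL_DYNAMIC_DRAW);

        /* First pass: a compute shader decides how much work the second pass
         * needs and writes the {x, y, z} group counts into the buffer, which is
         * also bound as an SSBO at binding point 0. */
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirectBuf);
        glUseProgram(progProducer);
        glDispatchCompute(1, 1, 1);

        /* Make the shader writes visible to the indirect-command read. */
        glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

        /* Second pass: the host still issues the call, but the work size now
         * comes from GPU memory, not from CPU arguments. */
        glUseProgram(progConsumer);
        glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, indirectBuf);
        glDispatchComputeIndirect(0);
    }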

Related

Is T&L technology obsolete?

I've been looking for information about how GPUs work. From various sources I found out that T&L (Transform and Lighting) technology was used for hardware acceleration; for example, it calculated polygon lighting. But as far as I know, developers today use the programmable graphics pipeline and implement lighting in shaders.
So what is T&L used for today?
The classic 'Transform & Lighting' fixed-function hardware along with the "Texture blend cascade" fixed-function hardware is generally considered obsolete. Instead, the "T&L" phase has been replaced with Vertex Shaders, and the "Texture blend cascade" has been replaced with Pixel Shaders.
For older legacy APIs that have a 'fixed-function' mode (Direct3D 9, OpenGL 1.x), most modern cards actually emulate the original behavior with programmable shaders.
There's an example for Direct3D 11 that emulates most (but not all) of the classic Direct3D 9 fixed-function modes, if you want to take a look at it on GitHub.
Generally speaking, you are better off using a set of shaders that implements the features you actually use rather than a bunch of stuff you don't.
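To make the replacement concrete, here is a minimal sketch of what a shader equivalent of fixed-function T&L (a modelview-projection transform plus one directional diffuse light) can look like in GLSL. The attribute and uniform names are placeholders, not part of any API:

    // --- vertex shader: the "transform & lighting" part ---
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uModelViewProj;
    uniform mat3 uNormalMatrix;
    uniform vec3 uLightDir;      // normalized, same space as the transformed normal
    out float vDiffuse;
    void main()
    {
        vDiffuse    = max(dot(normalize(uNormalMatrix * aNormal), -uLightDir), 0.0);
        gl_Position = uModelViewProj * vec4(aPosition, 1.0);
    }

    // --- fragment shader: the part the texture blend cascade used to do ---
    #version 330 core
    in float vDiffuse;
    uniform vec4 uMaterialColor;
    out vec4 fragColor;
    void main()
    {
        fragColor = vec4(uMaterialColor.rgb * vDiffuse, uMaterialColor.a);
    }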

Are shaders used very early in the latest OpenGL?

When I look at the 4th edition of the book "OpenGL SuperBible", it starts with drawing points, lines and polygons, and shaders are only discussed later. In the 6th edition of the book, it starts directly with shaders as the very first example. I haven't used OpenGL for a long time, but is this really the way to start, with shaders?
Why the shift? Is it because of the move from the fixed pipeline to shaders?
To a limited extent it depends on exactly which branch of OpenGL you're talking about. OpenGL ES 2.0 has no path to the screen other than shaders: there's no matrix stack, no option to draw without shaders, and none of the fixed-pipeline-bound built-in variables. WebGL is based on OpenGL ES 2.0, so it inherits all of that behaviour.
As per derhass' comment, all of the fixed stuff is deprecated in modern desktop GL and you can expect it to vanish over time. The quickest thing to check is probably the OpenGL 4.4 quick reference card. If the functionality you want isn't on there, it's not in the latest OpenGL.
As per your comment, Khronos defines OpenGL as:
the only cross-platform graphics API that enables developers of software for PC, workstation, and supercomputing hardware to create high-performance, visually-compelling graphics software applications, in markets such as CAD, content creation, energy, entertainment, game development, manufacturing, medical, and virtual reality.
It more or less just exposes the hardware. The hardware can't do anything without shaders. Nobody in the industry wants to be maintaining shaders that emulate the old fixed functionality forever.
"About the only rendering you can do with OpenGL without shaders is clearing a window, which should give you a feel for how important they are when using OpenGL." - From OpenGL official guide
With OpenGL 3.1, the fixed-function pipeline was removed from the core specification and shaders became mandatory.
So the SuperBible and the OpenGL Red Book now introduce the programmable pipeline early on, and then show how to write and use vertex and fragment shader programs.
For your shader objects you now have to:
Create the shader (glCreateShader, glShaderSource)
Compile the shader source into an object (glCompileShader)
Verify the shader (glGetShaderInfoLog)
Then you link the shader object into your shader program:
Create shader program (glCreateProgram)
Attach the shader objects (glAttachShader)
Link the shader program (glLinkProgram)
Verify (glGetProgramInfoLog)
Use the shader (glUseProgram)
There is more to do now before you can render than in the previous fixed function pipeline. No doubt the programmable pipeline is more powerful, but it does make it more difficult just to begin rendering. And the shaders are now a core concept to learn.
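For reference, a minimal sketch of those create/compile/link steps in C, assuming a current GL context and two GLSL source strings (vsSrc / fsSrc are placeholders):

    #include <stdio.h>
    #include <GL/glew.h>

    static GLuint compile(GLenum type, const char *src)
    {
        GLuint shader = glCreateShader(type);            /* create the shader object */
        glShaderSource(shader, 1, &src, NULL);           /* hand it the source       */
        glCompileShader(shader);                         /* compile it               */

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);   /* verify */
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "compile error: %s\n", log);
        }
        return shader;
    }

    GLuint build_program(const char *vsSrc, const char *fsSrc)
    {
        GLuint program = glCreateProgram();              /* create the program  */
        glAttachShader(program, compile(GL_VERTEX_SHADER,   vsSrc));
        glAttachShader(program, compile(GL_FRAGMENT_SHADER, fsSrc));
        glLinkProgram(program);                          /* link                */

        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);    /* verify */
        if (!ok) {
            char log[1024];
            glGetProgramInfoLog(program, sizeof(log), NULL, log);
            fprintf(stderr, "link error: %s\n", log);
        }
        return program;   /* then glUseProgram(program) before rendering */
    }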

Fastest way to modify openGL texture with openCL per pixel

Using OpenGL 4.4 and OpenCL 2.0, let's say I just want to modify specific pixels of a texture each frame.
What is the optimal way to achieve this?
Which object should I share?
Will I be able to modify only a limited number of pixels?
I want GPU-only operations.
First off, there are no OpenCL 2.0 drivers yet; the specification only recently got finalized and implementations probably won't happen until 2014.
Likewise, many OpenGL implementations aren't at 4.4 yet.
However, you can still do what you want with OpenCL 1.2 (or 1.1 since NVIDIA is behind the industry in OpenCL support) and current OpenGL implementations.
Look for OpenCL / OpenGL interop examples, but basically:
Create OpenCL context from OpenGL context
Create OpenCL image from OpenGL texture
After rendering your OpenGL into the texture, acquire the image for OpenCL, run an OpenCL kernel that updates only the specific pixels you want to change, and release it back to OpenGL
Draw the texture to the screen
Often OpenCL kernels are 2D and address each pixel, but you can run a 1D kernel where each work item updates a single pixel based on some algorithm. Just make sure not to write the same pixel from more than one work item or you'll have a race condition.
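A rough sketch of that acquire/modify/release loop, assuming an OpenCL 1.2 context created with GL sharing (cl_khr_gl_sharing), an existing command queue and compiled kernel, and a GL texture id; all names here are placeholders:

    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <GL/gl.h>

    void touch_pixels(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                      GLuint tex, size_t numPixelsToTouch)
    {
        cl_int err;

        /* Wrap the existing GL texture as a CL image (normally done once and reused). */
        cl_mem image = clCreateFromGLTexture(ctx, CL_MEM_READ_WRITE,
                                             GL_TEXTURE_2D, 0, tex, &err);

        glFinish();                                   /* make sure GL is done with the texture */
        clEnqueueAcquireGLObjects(queue, 1, &image, 0, NULL, NULL);

        /* 1D range: one work item per pixel that needs changing. */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &image);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &numPixelsToTouch, NULL, 0, NULL, NULL);

        clEnqueueReleaseGLObjects(queue, 1, &image, 0, NULL, NULL);
        clFinish(queue);                              /* GL can now sample the updated texture */
    }

    /* A matching kernel might look like this; each work item decides which pixel
     * it owns and writes it, so no two items touch the same coordinate:
     *
     *   __kernel void touch(__write_only image2d_t img)
     *   {
     *       int i = get_global_id(0);
     *       int2 coord = (int2)(i % 256, i / 256);     // hypothetical mapping
     *       write_imagef(img, coord, (float4)(1.0f, 0.0f, 0.0f, 1.0f));
     *   }
     */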

"Drawing of data generated by OpenGL or external APIs such as OpenCL, without CPU intervention."

I noticed that in the new features listed for OpenGL 4.0 the following is included:
Drawing of data generated by OpenGL or external APIs such as OpenCL,
without CPU intervention.
What functionality exactly is this referring to?
It's talking about ARB_draw_indirect. That functionality, core in 4.0, allows the GL implementation to read the drawing parameters directly from the buffer object. So the parameters you would pass to glDrawArrays or glDrawElements come from the buffer, not from your Draw call.
This way, OpenCL or other GPGPU code can simply write that struct into the buffer and thereby determine how many vertices to draw.
AMD has a pretty nifty variation of this that allows for multi-draw functionality.
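For illustration, a sketch of the indirect-draw path; the struct layout is the one defined by ARB_draw_indirect / GL 4.0, and the buffer contents could just as well be written by an OpenCL kernel instead of the CPU:

    #include <GL/glew.h>

    typedef struct {
        GLuint count;         /* number of vertices                          */
        GLuint primCount;     /* number of instances                         */
        GLuint first;         /* first vertex                                */
        GLuint baseInstance;  /* must be zero in GL 4.0 / ARB_draw_indirect  */
    } DrawArraysIndirectCommand;

    void draw_from_gpu_generated_params(GLuint vao, GLuint indirectBuf)
    {
        glBindVertexArray(vao);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
        /* The vertex count, instance count, etc. are read from the buffer,
         * not from arguments supplied in the draw call. */
        glDrawArraysIndirect(GL_TRIANGLES, (const void *)0);
    }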

OpenGL: "Fragment Shader not supported by HW" on old ATI card

In our OpenGL game we've got a shader link failure on an ATI Radeon x800 card. glGetProgramInfoLog reports:
Fragment shader(s) failed to link, vertex shader(s) linked.
Fragment Shader not supported by HW
Some googling suggests that we may be hitting an ALU instruction limit, due to a very long fragment shader. Any way to verify that?
I wasn't able to find detailed specs for the x800, nor any way to query the instruction limit at runtime. And even if I was able to query it, how do I determine the number of instructions of my shader?
There are several limits you may hit:
maximum shader length
maximum number of texture indirections (this is the limit most easily crossed)
using unsupported features
Technically the X800 is a Shader Model 2 GPU, which is about what GLSL 1.20 provides. When I started shader programming on a Radeon 9800 (and the X800 is technically just a scaled-up 9800), I quickly abandoned the idea of doing it with GLSL; it was just too limited. And as so often when a computer has only limited resources and capabilities, the way out was assembly, in this case the assembly-style language provided by the ARB_fragment_program extension.
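On that class of hardware you can at least query the assembly-level limits exposed by ARB_fragment_program, which is roughly what a GLSL fragment shader ultimately has to fit into. A sketch, assuming the extension is present and the entry points have been loaded:

    #include <stdio.h>
    #include <GL/glew.h>

    void print_fragment_program_limits(void)
    {
        GLint aluMax = 0, nativeAluMax = 0, texMax = 0, indirMax = 0;

        /* ALU and TEX instruction limits, plus the texture-indirection limit
         * mentioned above, as reported by the driver. */
        glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                          GL_MAX_PROGRAM_ALU_INSTRUCTIONS_ARB, &aluMax);
        glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                          GL_MAX_PROGRAM_NATIVE_ALU_INSTRUCTIONS_ARB, &nativeAluMax);
        glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                          GL_MAX_PROGRAM_TEX_INSTRUCTIONS_ARB, &texMax);
        glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                          GL_MAX_PROGRAM_TEX_INDIRECTIONS_ARB, &indirMax);

        printf("ALU instructions: %d (native %d), TEX instructions: %d, indirections: %d\n",
               aluMax, nativeAluMax, texMax, indirMax);
    }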
GLview is a great tool for easily viewing all the limits and supported GL extensions of a GPU/driver combination. If I recall correctly, I previously used AMD's GPU ShaderAnalyzer, which lets you see the compiled assembly version of GLSL shaders. NVIDIA offers similar functionality with its NVemulate tool.
The X800 is very limited in shader power compared to current GPUs. You would probably have to cut back on your shader complexity anyway to achieve acceptable performance on this lower-end GPU. If you already have your GLSL version running, simply choosing different fragment shaders for the X800 is probably the most sensible approach.