Shaders in place of GPGPU - OpenGL

I want to experiment with some GPGPU. There were five options to choose from: OpenCL, CUDA, FireStream, Close to Metal, and DirectCompute. Well, not really: after filtering them for my needs, none fits :) I'm using a Radeon HD 3870, so CUDA is out; I want cross-platform, so DirectCompute is out; Close to Metal evolved into FireStream (AMD's equivalent of CUDA), and FireStream is now "deprecated" in favor of OpenCL. And guess what? OpenCL is only available from the Radeon 4xxx series onward. So I don't want to learn something that's no longer supported, and I don't have the hardware for the new one.
So until I get a new card, I figure shaders can do similar things; it's just much harder to get the results back, and slower too. Anyway, I don't plan to do research with this, so it could be good enough for me. Searching Google for this mostly turns up garbage, so: what are the possibilities for rendering somewhere other than the framebuffer used for display? Can one create textures for this, or what other buffers would be best suited? In the case of a texture I'd like some info on how to access the results; with buffers it shouldn't be much of a problem.
Almost forgot: I'm using OpenGL 3.1 and GLSL 1.5.
Thanks

It's completely possible; GPGPU was done that way before CUDA appeared. Here is a tutorial from that era:
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
To render somewhere other than the framebuffer used for display, you can render into a texture attached to a framebuffer object (the render-to-texture approach the tutorial uses), or use Transform Feedback, core since OpenGL 3.0, to capture vertex shader outputs directly into a VBO.
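For the VBO route, here is a rough sketch of the transform feedback setup in C (OpenGL 3.x). It assumes a vertex shader that declares a vec4 output named outValue; shader compile checks, VAO setup, and error handling are omitted, and all the names are just illustrative:

    /* Capture one vec4 per input point from the vertex shader into a VBO. */
    #include <GL/glew.h>

    GLuint make_feedback_program(const char *vs_src)
    {
        GLuint prog = glCreateProgram();
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL);
        glCompileShader(vs);
        glAttachShader(prog, vs);

        /* Must be set BEFORE linking: which output(s) to capture. */
        const char *varyings[] = { "outValue" };
        glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);

        glLinkProgram(prog);
        return prog;
    }

    void run_feedback(GLuint prog, GLuint inputVAO, GLuint outVBO, GLsizei n)
    {
        glUseProgram(prog);
        glEnable(GL_RASTERIZER_DISCARD);   /* we only want the vertex stage */
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, outVBO);

        glBindVertexArray(inputVAO);
        glBeginTransformFeedback(GL_POINTS);
        glDrawArrays(GL_POINTS, 0, n);
        glEndTransformFeedback();
        glDisable(GL_RASTERIZER_DISCARD);

        /* Peek at the first few captured vec4s back on the CPU. */
        GLfloat first4[16];
        glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                           sizeof(first4), first4);
    }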

Related

Replicating Cathode retro terminal effect?

I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they using OpenGL and shaders, maybe?
If you are still interested in the subject, I'm working on a similar project. The effects were obtained by using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be very readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have noticed that Android phones have (had?) such an animation when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
It seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion.
But it would take a lot of work to add all the features that Cathode has, not to mention making them run quickly.
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
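For the radial distortion part specifically, the usual trick is a barrel warp of the texture lookup in the fragment shader rather than an actual deformed grid. A minimal sketch; the texture name, the in/out names, and the 0.1 curvature constant are all made up for illustration:

    /* GLSL 1.50 fragment shader, embedded as a C string: sample the
       rendered terminal texture through warped UVs to fake a curved
       CRT face. Constants and names are illustrative. */
    static const char *crt_frag_src =
        "#version 150\n"
        "uniform sampler2D screenTex;  // the rendered terminal text\n"
        "in vec2 uv;                   // 0..1 across a fullscreen quad\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec2 c = uv * 2.0 - 1.0;        // recenter to -1..1\n"
        "    c *= 1.0 + 0.1 * dot(c, c);     // barrel-warp the lookup\n"
        "    vec2 w = (c + 1.0) * 0.5;       // back to 0..1\n"
        "    if (w != clamp(w, 0.0, 1.0))\n"
        "        fragColor = vec4(0.0);      // outside the tube: black\n"
        "    else\n"
        "        fragColor = texture(screenTex, w);\n"
        "}\n";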
If you want it to run fast without killing the CPU, the GPU is the best solution, so: pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders.
I suggest writing this in Qt4 and adding pixel shader effects to the QWidget.

(Rendering particles) Should I learn shaders or OpenCL?

I am trying to run 100,000 or more particles.
I've been watching many tutorials and other examples that demonstrate the power of shaders and OpenCL.
In one example I watched, each particle's position was calculated based on the position of the mouse pointer (the physical device you hold in one hand, and the cursor on the screen).
The position of each particle was stored as an RGB color, R being x, G being y, and B being z, and passed to a pixel shader; each pixel's color was then drawn as a particle's position afterward.
However, this approach felt absurd to me.
Isn't this approach or coding style something to be avoided?
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to express my intended code directly?
Isn't this approach or coding style something to be avoided?
Why?
The entire point of shaders is for you to be able to do what you want, to more effectively express what you want to do, and to allow yourself greater control over the hardware.
You should never, ever be afraid of re-purposing something for a different functionality. Textures do not store colors; they store data, which can be color, but it can also be other stuff. The sooner you stop thinking of textures as pictures, the better off you will be as a graphics programmer.
The GPU and API exist to be used. Use it as you see fit; do not allow how you think the API should be used to limit you.
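To make the "textures store data" point concrete, here is a sketch of the update pass from the example you describe: particle positions live in a floating-point texture, and a fragment shader rendered into a second texture (ping-ponging between the two each frame) advances them. All names and the layout are illustrative:

    /* GLSL 1.50 fragment shader as a C string: one texel = one particle. */
    static const char *update_frag_src =
        "#version 150\n"
        "uniform sampler2D posTex;  // RGB = particle x, y, z\n"
        "uniform sampler2D velTex;  // RGB = particle velocity\n"
        "uniform float dt;          // timestep\n"
        "in vec2 uv;\n"
        "out vec4 newPos;           // written to the other position texture\n"
        "void main() {\n"
        "    vec3 p = texture(posTex, uv).rgb;\n"
        "    vec3 v = texture(velTex, uv).rgb;\n"
        "    newPos = vec4(p + v * dt, 1.0);  // integrate one step\n"
        "}\n";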
Shouldn't I learn how to use OpenCL and use the power of the GPU's multithreading to express my intended code directly?
Yesterday, I would have said "yes". However, today this was released: OpenGL compute shaders.
The fact that the OpenGL ARB and Khronos created this shader type and so forth is a tacit admission that OpenCL/OpenGL interop is not the most efficient way to generate data for rendering purposes. After all, if it was, there would be no need for OpenGL to have generalized compute functionality. There were 3 versions of GL 4.x that didn't provide this. The fact that it's here now is basically the ARB saying, "Yeah, OK, we need this."
If the ARB, staffed by many people who make the hardware, think that CL/GL interop is not the fastest way to go, then it's pretty clear that you should use compute shaders.
Of course, if you're trying to do something right now, that won't help; only NVIDIA has compute shader support. And even that's only in beta drivers. It will take many months before AMD gets support for them, and many more before that support becomes solid and stable enough to use.
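For a sense of what's coming, here is roughly what a particle update looks like as a compute shader. This is only a sketch; the binding points, work-group size, and names are all illustrative:

    /* GLSL 4.30 compute shader as a C string: positions and velocities
       live in shader storage buffers, one vec4 per particle. */
    static const char *particle_cs_src =
        "#version 430\n"
        "layout(local_size_x = 128) in;\n"
        "layout(std430, binding = 0) buffer Pos { vec4 pos[]; };\n"
        "layout(std430, binding = 1) buffer Vel { vec4 vel[]; };\n"
        "uniform float dt;\n"
        "void main() {\n"
        "    uint i = gl_GlobalInvocationID.x;\n"
        "    pos[i].xyz += vel[i].xyz * dt;\n"
        "}\n";

    /* Host side: one invocation per particle, 128 per work group. */
    void step_particles(GLuint prog, GLuint particle_count)
    {
        glUseProgram(prog);
        glDispatchCompute((particle_count + 127) / 128, 1, 1);
        /* Pick the barrier bit for whoever reads the data next; this
           one covers subsequent shader reads of the storage buffers. */
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    }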
Even so, you don't need compute shaders to generate data. People have used transform feedback and geometry shaders to do LOD and frustum culling for instanced rendering. Do not be afraid to think outside of the "OpenGL draws stuff" box.
To experiment with GPU particle simulation, you should try out "Yet Another Shader Editor" / http://yase.chnk.us/ - it takes away all the tricky parts and lets you get down to the meat of coding the particle control algorithms, right in your browser. Nothing to download, no accounts to create; just alter whatever examples you find. It's a blast.
https://lotsacode.wordpress.com/2013/04/16/fun-with-particles-yet-another-shader-editor/
I'm not affiliated with yase in any way.

Is Cairo accelerated on the OpenGL backend?

By this I mean: does Cairo draw lines, shapes, and everything else using OpenGL-accelerated primitives or not? And if not, is there a library that does?
The OpenGL backend certainly accelerates some functions. But there are many it can't accelerate. The fact that it's written against GL 2.1 (and thus can't use more advanced features of 3.x or 4.x hardware) means that there is a lot that it simply cannot accelerate.
If you are willing to limit yourself to NVIDIA hardware, NVIDIA just came out with the NV_path_rendering extension, which provides a lot of the 2D functionality you would find with Cairo. Indeed, it's possible that you could write a Cairo backend for it. The path rendering extension is only available on GeForce 8xxx hardware and above.
It's nifty in that it's focused on the vertex pipeline: it doesn't do things like gradients or colors or whatever. That's good, because it still lets you use a fragment shader, which means you get to do pretty much whatever you want ;)
Cairo is designed to have a flexible backend for rendering. It can use OpenGL for rendering, though support is still listed as "experimental" at this point. For details, see using cairo with OpenGL.
It can also output to the X Window System, Quartz, Win32, image buffers, PostScript, PDF, SVG, and more.
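If you want to try the experimental backend, the setup looks roughly like this. I'm going from the cairo-gl API as documented at the time (experimental, so verify the entry points against your cairo build); GLX context creation and error handling are omitted:

    /* Draw with cairo on top of an existing GLX context. */
    #include <X11/Xlib.h>
    #include <GL/glx.h>
    #include <cairo.h>
    #include <cairo-gl.h>

    void draw_accelerated(Display *dpy, GLXContext ctx, int w, int h)
    {
        cairo_device_t *dev = cairo_glx_device_create(dpy, ctx);
        cairo_surface_t *surf =
            cairo_gl_surface_create(dev, CAIRO_CONTENT_COLOR_ALPHA, w, h);
        cairo_t *cr = cairo_create(surf);

        /* Same drawing API as any other backend; cairo decides which
           operations the GL backend can actually accelerate. */
        cairo_set_source_rgb(cr, 1.0, 0.5, 0.0);
        cairo_arc(cr, w / 2.0, h / 2.0, h / 4.0, 0.0, 6.28318530718);
        cairo_fill(cr);

        cairo_destroy(cr);
        cairo_surface_destroy(surf);
        cairo_device_destroy(dev);
    }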

What's the best tool you can use to learn to program shaders?

I've recently been doing some DirectX 10 work and I'm looking to move to DirectX 11 and Shader Model 5.0. I've written a few very simple shaders in the past and I'm looking to broaden my horizons and venture into more complex shaders. My question is sort of multi-part:
1. What is the best tool out there to program shaders with? I've only used Visual Studio and some FX Composer (read: enough to open it up and look around).
2. Does the brand of graphics card affect what type of shaders you can program? The reason I ask is that Nvidia seems to have way better tools, while ATI seems to have cancelled RenderMonkey, and I don't see any replacement for it. Am I wrong?
3. Sort of the same question as #1, but can you use cross-vendor tools if you just intend to write DirectX shaders and nothing vendor-specific?
4. If you need to go vendor-specific, does Nvidia generally have better tools? I'd really like to go ATI right now, as they seem to be the best bang for the buck (and I have an AMD board), but I'm hesitant because I mostly use my graphics cards for programming.
What is your ultimate goal? If you'd just like to know a little more about shading, or if you are an artist or technical artist, then (1) FX Composer and RenderMonkey are pretty good.
If you are a programmer and you'd like to build graphics engines, then (1) you should use a text editor, because the shader is just one small part of a graphics engine. Past a certain low level of sophistication, shader constants and certain textures need to be created on the fly in a language like C++ (see the sketch after this list).
(2) The brand of graphics card doesn't affect what kind of shaders you can program at all nowadays.
(3) Cross-vendor tools are fine.
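To illustrate the point about constants being created on the fly: past the toy stage, your engine code ends up feeding the shader new values every frame, which a shader IDE can't really model. An OpenGL-flavored sketch (the uniform names are made up; the D3D11 equivalent would update a constant buffer):

    /* Engine-side per-frame shader constant updates. */
    #include <GL/glew.h>

    void set_per_frame_constants(GLuint prog, const float mvp[16], float t)
    {
        glUseProgram(prog);
        /* Locations are looked up by the names declared in the shader. */
        glUniformMatrix4fv(glGetUniformLocation(prog, "u_mvp"),
                           1, GL_FALSE, mvp);
        glUniform1f(glGetUniformLocation(prog, "u_time"), t);
    }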
Though you didn't address this issue in your question, I feel I should mention: not only are shaders today just a small part of a graphics engine, but their role will diminish soon as focus shifts to deferring work until "post-processing" using "compute." A shader will soon typically output abstract terms like albedo and normal, not color. Its relevance to art will decline.

Using shaders for calculations

Is it possible to use a shader to calculate some values and then return them for further use?
For example, I send a mesh down to the GPU with some parameters for how it should be modified (changing the positions of vertices), and take the resulting mesh back? That seems rather impossible to me, because I haven't seen any variable for communication from the shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes, and varyings. Should I use an attribute or a uniform; would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would those be changed and valid?
This is the way I'm thinking about it, though there could be another way that is unknown to me. I'd be glad if someone could explain this to me, as I've just read some books about GLSL and would now like to program more complex shaders, but I don't want to rely on methods that are impossible at this time.
Thanks
Great question! Welcome to the brave new world of General-Purpose Computing on Graphics Processing Units (GPGPU).
What you want to do is possible with pixel shaders. You load a texture (that is, your data), apply a shader (to do the desired computation), and then use render-to-texture to get the resulting data from the GPU back into main memory (RAM).
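In OpenGL terms, the round trip looks something like this sketch: render a fullscreen quad with your computation shader into a float texture attached to an FBO, then read the pixels back. The FBO, shader, and quad setup are assumed to exist already, and the names are illustrative:

    /* Run a fragment shader over a w x h grid and read the results back. */
    #include <GL/glew.h>
    #include <stdlib.h>

    extern void draw_fullscreen_quad(void);  /* assumed helper */

    float *compute_on_gpu(GLuint fbo, GLuint prog, int w, int h)
    {
        /* fbo has a GL_RGBA32F texture as its color attachment. */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, w, h);
        glUseProgram(prog);
        draw_fullscreen_quad();

        /* Pull the shader's output into main memory. */
        float *out = malloc((size_t)w * h * 4 * sizeof(float));
        glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, out);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return out;
    }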
There are tools created specifically for this purpose, most notably OpenCL and CUDA. They greatly aid GPGPU, so that this sort of programming looks almost like CPU programming.
They do not require any 3D graphics experience (although it still helps :) ). You don't need to do tricks with textures; you just load arrays into GPU memory. Processing algorithms are written in a slightly modified version of C, and the latest version of CUDA supports C++.
I recommend starting with CUDA, since it is the most mature: http://www.nvidia.com/object/cuda_home_new.html
This is easily possible on modern graphics cards using OpenCL, Microsoft DirectCompute (part of DirectX 11), or CUDA. The languages used are close to the normal shader languages (GLSL and HLSL, for example). The first two work on both NVIDIA and ATI graphics cards; CUDA is NVIDIA-exclusive.
These are special libraries for computing on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.
Nowadays you can use shader storage buffer objects (SSBOs) in OpenGL to write values from shaders that can then be read back on the host.
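A sketch of how that looks (OpenGL 4.3+; the binding point and names are illustrative): declare a storage buffer in the shader, write to it, then read it back on the host after a barrier.

    #include <GL/glew.h>

    /* Shader side (any stage), embedded as a C string: */
    static const char *ssbo_decl =
        "#version 430\n"
        "layout(std430, binding = 0) buffer Results { float results[]; };\n"
        /* ...then anywhere in the shader body: results[i] = value; ... */;

    /* Host side: make the shader's writes visible, then copy them out. */
    void read_results(GLuint ssbo, float *dst, GLsizei count)
    {
        glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
        glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                           count * sizeof(float), dst);
    }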
My best suggestion would be to point you to BehaveRT, which is a library created to harness GPUs for behavioral models. I think that if you can formulate your modifications within the library, you could benefit from its abstractions.
As for passing data back and forth between the CPU and GPU, I'll let you browse the documentation; I'm not sure about it.