GLSL: How to retrieve a vertex position after a shader processes it?

I wrote a program that simulates soft bodies using springs. It looks nice, but the problem is that it consumes a lot of CPU time, so I can't run it on my laptop or on any PC that isn't high end.
I thought it would be a good idea to write a vertex shader and move the logic to the GPU. I've read some tutorials and made a toon shader, so I thought (wrongly) that I was ready to go.
The big problem I have is that I need to know the old position of a vertex to calculate the new one. How could I retrieve a vertex's position so I can send it back to the shader each frame?
I'm not really sure this is even possible, and maybe I'm trying to do something that shaders were never meant to do. I'm still researching, but I thought I'd ask and see if someone can help.

You can use the transform feedback mechanism if your hardware supports OpenGL 3.0 or above. There are also other techniques for getting vertex positions back, such as carefully arranging your rendering so that you write a triangle (or point primitive) into each separate pixel on the screen. That is fairly difficult, and you need to render to a floating-point buffer, which requires FBO support.
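As a rough illustration of the transform feedback route, the vertex shader can integrate the simulation state and write the updated values to outputs that get captured into a buffer, which is then bound as the input for the next frame. This is only a sketch: the attribute names, the uniforms and the trivial Euler step are made up here, and the spring forces themselves are omitted.

    #version 130
    // Sketch of a vertex shader whose outputs are captured with transform feedback.
    // in_position / in_velocity / dt / gravity are illustrative names, not from the post.
    in vec3 in_position;
    in vec3 in_velocity;

    uniform float dt;
    uniform vec3 gravity;

    out vec3 out_position;   // captured into a buffer object each frame
    out vec3 out_velocity;   // fed back in as in_position / in_velocity next frame

    void main()
    {
        vec3 vel = in_velocity + gravity * dt;   // accumulate forces (spring terms omitted)
        out_position = in_position + vel * dt;   // simple Euler integration
        out_velocity = vel;
        gl_Position = vec4(out_position, 1.0);   // optional: also rasterize the result
    }

On the host side you would declare out_position and out_velocity with glTransformFeedbackVaryings before linking, bind a buffer to GL_TRANSFORM_FEEDBACK_BUFFER, and wrap the draw call in glBeginTransformFeedback / glEndTransformFeedback, swapping the input and output buffers each frame.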

Related

Data from shader to main program

I'm working on a small program in OpenGL and realized I need to get some data back from the geometry shader to the main program so I can handle mouse events.
Not much, just some specific square coordinates that are calculated in the geometry shader.
How should I do this? Should I use a small FBO or should I make all the calculations in the main program and then send them to the geometry shader?
Generally speaking, you should do as much computation as possible in the host program.
If you want to read back data from a shader, Google is your friend. Outputting to an FBO is possible, although you'll also need a nontrivial fragment shader. The best option is often to use an SSBO, although image load-store or transform feedback may be more appropriate depending on what you're trying to do.
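A minimal sketch of the SSBO route, assuming GL 4.3 or ARB_shader_storage_buffer_object; the block name, binding point, and the maxCorners/cpuCorners variables are placeholders for illustration. In the geometry shader you declare a writable storage block:

    #version 430
    layout(std430, binding = 0) buffer ResultBlock {
        vec4 squareCorners[];   // written from the geometry shader, read back on the CPU
    };

and on the C side you create the buffer, bind it to the same binding point, draw, and read it back:

    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, maxCorners * 4 * sizeof(float), NULL, GL_DYNAMIC_READ);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);    /* matches "binding = 0" */

    /* ... issue the draw call that runs the geometry shader ... */

    glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);          /* make the writes visible to the readback */
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, maxCorners * 4 * sizeof(float), cpuCorners);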
The easiest way to do this is to color-code the values you need to send back to the host and read them with glReadPixels.
You need to render to a separate framebuffer to hide the calculation from the screen.
If you want to implement hit-testing on objects in your scene and are not GPU bound, this is the way to go.
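A sketch of that approach, assuming you already have an offscreen framebuffer and a flat-color shader bound; pickingFbo, colorLocation, drawObject and the mouse coordinates are placeholder names:

    /* Draw every object in a unique flat color into the picking framebuffer. */
    glBindFramebuffer(GL_FRAMEBUFFER, pickingFbo);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < objectCount; ++i) {
        /* encode the object index in the red/green/blue bytes */
        glUniform3f(colorLocation,
                    ((i >> 16) & 0xFF) / 255.0f,
                    ((i >>  8) & 0xFF) / 255.0f,
                    ( i        & 0xFF) / 255.0f);
        drawObject(i);
    }

    /* Read the single pixel under the mouse and decode the object index. */
    unsigned char pixel[4];
    glReadPixels(mouseX, viewportHeight - mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    int pickedId = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
    glBindFramebuffer(GL_FRAMEBUFFER, 0);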

Reading current framebuffer

Is there a way to read a fragment from the framebuffer that is currently being rendered to?
That is, I'm looking for a way to read the color information of the fragment at the position that the current fragment will probably overwrite, i.e. the exact position of the previously rendered fragment.
I found gl_FragData and gl_LastFragData, which certain EXT_ extensions add to shaders; if they are what I need, could somebody explain how to use them?
I'm looking for either an OpenGL or an OpenGL ES 2.0 solution.
EDIT:
All along I was searching for a solution that would give me some kind of read & write "uniform" accessible from shaders. For anyone out there searching for something similar: OpenGL 4.3+ supports image and buffer storage types. They allow both reading and writing simultaneously, and in combination with compute shaders they have proven to be a very powerful tool.
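For anyone who lands here later, a minimal compute-shader sketch of such a read & write buffer (GL 4.3; the block and variable names are made up for illustration):

    #version 430
    layout(local_size_x = 64) in;

    // A shader storage buffer can be read and written by the same dispatch.
    layout(std430, binding = 0) buffer Particles {
        vec4 positions[];
    };

    void main()
    {
        uint i = gl_GlobalInvocationID.x;
        positions[i].y += 0.01;   // trivial in-place update
    }

You launch it with glDispatchCompute and then issue glMemoryBarrier with the bit that matches how the data is consumed afterwards (for example GL_SHADER_STORAGE_BARRIER_BIT for reads from another shader) before using the results.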
Your question seems rather confused.
Part of your question (the first sentence) asks if you can read from the framebuffer in the fragment shader. The answer is, generally no. There is an OpenGL ES 2.0 extension that lets you do so, but it's only supported on some hardware. In desktop GL 4.2+, you can use arbitrary image load/store to get the same effect. But you can't render to that image anymore; you have to write your data using image storing functions.
gl_LastFragData is pretty simple: it's the color from the sample in the framebuffer that will be overwritten by this fragment shader. You can do with it what you wish, if it is available.
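If the ES 2.0 extension (EXT_shader_framebuffer_fetch) is present, a fragment shader can use it roughly like this; u_tint is just an illustrative uniform, not something the extension defines:

    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;

    uniform vec4 u_tint;

    void main()
    {
        // gl_LastFragData[0] is the color already in the framebuffer at this fragment
        vec4 dst = gl_LastFragData[0];
        gl_FragColor = mix(dst, u_tint, 0.5);   // e.g. blend it manually with our own color
    }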
The second part of your question (the second paragraph) is a completely different question. There, you're asking about fragments that were potentially never written to the framebuffer. You can't read fragments; you can only read images. And if a fragment fails the depth test, its data was never written to an image, so you can't read it.
On most NVIDIA hardware you can use the GL_NV_texture_barrier extension to read from a texture that's currently bound to a framebuffer. Bear in mind, though, that you won't be able to read data any more recent than what was produced by the previous draw call.

OpenGL/GLSL shader animation and lighting issue

So lately I've taken my first serious steps (or at least I think so) into OpenGL/GLSL and shaders in general.
I've managed to construct and render VBOs, create and compile shaders, and also mess with them to some degree.
I'm using a vertex shader to fix my OpenGL view (correct the aspect ratio) and also to perform animation. This is achieved with various matrix manipulations.
One might ask why I'm using vertex shaders for animation, but from reading articles around the web I got the impression that it's best to keep VBOs static rather than update them constantly; some sort of GPU vs. CPU trade-off.
Now, I may be wrong about that, which is why I'm reaching out for help on the matter. My view is that in the future I might make a game which (for instance) has a lot of coins for a player to grab, and I would like them to be stored statically on the GPU side and then use the shader to rotate them.
Moving on... "Let there be light".
I've also managed to use my normals in the vertex shader to do lighting. It all works fine, except that the light rotates with my cube (I'm currently using a cube as a test dummy). Now, I know what's wrong here: it's my vertex shader transforming absolutely everything (including my light source, I guess). I can think of a way or two to solve this problem. One would be to apply the reverse transformation to my light source so I can keep it static while everything else rotates.
And here's where everything gets blurry. I'm reaching out to Stack Overflow for guidance on how to move forward. I'm trying to think bigger: what if, in the future, I have plenty of objects I'd like to perform basic animations on (such as rotation, scaling, translation)? Would that require different shaders, or even one packed shader with every function in it? And how would I even use it? Would I pass different values into the same shader before rendering each object?
Right now, to be honest, I want to handle the lighting issue. But I have a feeling that the way I'm about to approach this will set my general approach to shader animation. Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before rendering each VBO. I have my concerns about whether that's efficient enough, but I definitely like the idea.
Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before rendering each VBO.
Which question/answer was this? Normally you should avoid switching shaders where possible. Maybe the person meant uniforms, which are parameters to shaders but can be changed cheaply (see the sketch at the end of this answer).
Also, your question is far too broad and not very concise (all that backstory hides the actual issue). I strongly suggest you split your doubts up into a number of small questions that can be answered separately.
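As a sketch of the uniform approach (one program stays bound, per-object values are just uniform updates; modelMatrixLoc, coins and drawCoin are placeholder names):

    glUseProgram(program);                       /* bind the shader once */
    GLint modelMatrixLoc = glGetUniformLocation(program, "u_model");

    for (int i = 0; i < coinCount; ++i) {
        /* cheap per-object change: upload this object's rotation/translation matrix */
        glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, coins[i].modelMatrix);
        drawCoin(&coins[i]);                     /* glDrawElements / glDrawArrays inside */
    }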

Is it possible to reuse GLSL vertex shader output later?

I have a huge mesh (100k triangles) that needs to be drawn a few times and blended together every frame. Is it possible to reuse the vertex shader output of the first pass over the mesh and skip the vertex stage on later passes? I am hoping to save some cost in the vertex pipeline and rasterization.
I'm targeting OpenGL 3.0, so I can use features like transform feedback.
I'll answer your basic question first, then answer your real question.
Yes, you can store the output of vertex transformation for later use. This is called Transform Feedback. It requires OpenGL 3.x-class hardware or better (aka: DX10-hardware).
The way it works is in two stages. First, you have to set your program up to have feedback-based varyings. You do this with glTransformFeedbackVaryings. This must be done before linking the program, in a similar way to things like glBindAttribLocation.
Once that's done, you need to bind buffers (given how you set up your transform feedback varyings) to GL_TRANSFORM_FEEDBACK_BUFFER with glBindBufferRange, thus setting up which buffers the data are written into. Then you start your feedback operation with glBeginTransformFeedback and proceed as normal. You can use a primitive query object to get the number of primitives written (so that you can draw it later with glDrawArrays), or if you have 4.x-class hardware (or AMD 3.x hardware, all of which support ARB_transform_feedback2), you can render without querying the number of primitives. That would save time.
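A condensed sketch of those steps (the varying name, the feedbackBuffer/query objects and the sizes are placeholders):

    /* 1. Before linking: name the vertex shader outputs to capture. */
    const char *varyings[] = { "out_position" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    /* 2. Bind a buffer to receive the captured vertices. */
    glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer, 0, bufferSize);

    /* 3. Capture while drawing, counting the primitives written with a query. */
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

    /* Later: read back the primitive count and draw the captured data again. */
    GLuint written = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &written);
    glBindBuffer(GL_ARRAY_BUFFER, feedbackBuffer);
    /* ... set up attribute pointers, then glDrawArrays(GL_TRIANGLES, 0, written * 3); */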
Now for your actual question: it's probably not going to buy you any real performance.
You're drawing terrain. And terrain doesn't really get any transformation. Typically you have a matrix multiplication or two, possibly with normals (though if you're rendering for shadow maps, you don't even have that). That's it.
Odds are very good that if you shove 100,000 vertices down the GPU with such a simple shader, you've probably saturated the GPU's ability to render them all. You'll likely bottleneck on primitive assembly/setup, and that's not getting any faster.
So you're probably not going to get much out of this. Feedback is generally used for either generating triangle data for later use (effectively pseudo-compute shaders), or for preserving the results from complex transformations like matrix palette skinning with dual-quaternions and so forth. A simple matrix multiply-and-go will barely be a blip on the radar.
You can try it if you like, but odds are it won't gain you much. Generally, the best solution is to employ some form of deferred rendering, so that you only have to render an object once, plus X times for the shadows it casts (where X is determined by the shadow mapping algorithm). And since shadow maps require different transforms, you wouldn't gain anything from feedback anyway.

OpenGL 3.1-4.1 new and deprecated features

I've been working with OpenGL for about a year now and have learned a lot. Unfortunately, the way I learned it was the old pre-3.x way, meaning immediate mode, default shaders, matrix stacks, etc. I more or less have an idea of what has changed since then by looking at the OpenGL specs, but I don't totally understand some of the new ways of doing things.
From my understanding they got rid of the matrix stacks, meaning you have to keep track of your own transformation matrices, which doesn't seem too complicated. They also got rid of immediate mode, meaning you now need to use VBOs or VAOs (I never know which one, maybe both...) to send the position/normal/texture etc. information to the shader program. I don't really get how these objects work; I think you need to put all the info into them and provide an offset of some sort to mark where the position, normal and texture coordinates are separated. Could someone briefly explain how this actually works (or send me a link that explains it)? I tried Wikipedia and Googling it, but found myself still not quite understanding them.
Another thing I would like to know more about is shaders, as I've never used them. I'm not going to ask how to code them or anything, just what needs to go in there and what OpenGL still does for you. More specifically, what would you need to do in the shaders to get a basic rendering program? I know you need to do all the lighting calculations and use your matrices to calculate the real vertex position. But does OpenGL still take care of backface culling, line clipping, polygon filling and other lower-level issues, or do you have to code them yourself in the shaders (or do they not even belong in the shaders)?
Since immediate mode is deprecated, doing a "hello triangle" application is a bit more involved. There is a good tutorial on modern OpenGL here:
http://arcsynthesis.org/gltut/
You should read it thoroughly. Bear in mind that it doesn't use VAOs, so you'll have to read about them somewhere else afterwards. VAOs don't change things much, so you won't have to unlearn anything from the mentioned tutorial to use them.
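Roughly, a VBO is just a buffer of raw vertex data and a VAO records how the attributes are laid out inside it. Here is a sketch with interleaved position/normal/texture coordinates; the attribute indices 0/1/2 and the vertexData/vertexCount variables are placeholders:

    GLsizei stride = 8 * sizeof(float);   /* 3 position + 3 normal + 2 texcoord floats */

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * stride, vertexData, GL_STATIC_DRAW);

    /* the offsets mark where each attribute starts inside one vertex */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                    /* position */
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));  /* normal   */
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));  /* texcoord */

    glBindVertexArray(0);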
And about your second question... Your vertex shader will be executed by OpenGL for every vertex. Your job there is to calculate the final position of the vertex and to prepare data (like normals, light data...) to be passed to the fragment shader, given the attributes of the vertex and the other data you send to the shader (uniforms; you'll read about them in the tutorial). The fragment shader is executed per fragment, and in it you calculate the final color of each fragment.
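As a bare-bones illustration of that split (GLSL 3.30; the attribute/uniform names and the toy lighting are made up), a vertex shader might look like this:

    #version 330
    layout(location = 0) in vec3 in_position;
    layout(location = 1) in vec3 in_normal;

    uniform mat4 u_mvp;      // you build and supply this matrix yourself now

    out vec3 v_normal;       // handed on to the fragment shader

    void main()
    {
        v_normal = in_normal;
        gl_Position = u_mvp * vec4(in_position, 1.0);   // final clip-space position
    }

and the matching fragment shader computes the color:

    #version 330
    in vec3 v_normal;
    out vec4 fragColor;

    void main()
    {
        // toy "lighting": brightness from the normal's angle to a fixed direction
        float d = max(dot(normalize(v_normal), vec3(0.0, 0.0, 1.0)), 0.0);
        fragColor = vec4(vec3(d), 1.0);
    }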
You can see here:
http://www.opengl.org/sdk/docs/man4/
that things like glPolygonMode and glCullFace are still there.
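For example, these calls still work exactly as before, with no shader code involved:

    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);                          /* backface culling stays fixed-function */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);    /* e.g. wireframe fill mode */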