Modifying a uniform variable inside a shader program to be used by another shader program - RenderMonkey - glsl

RenderMonkey is pretty ancient so I'm struggling to find a way to do this. To be clear, the only reason I'm using RenderMonkey is because it's for a University assignment.
RenderMonkey allows you to define uniform variables in the workspace that your shader programs can refer to. They also offer "variable semantics" which are predefined variables that change depending on the situation. For example, time elapsed. As time increases, the float "time" increases.
I have an elephant with a gun strapped to its back (strange, I know). The gun fires particles over time. The gun also rotates over time. In order for the bullet particles to fire in the correct direction, they also need to be rotated in the same way as the gun. My first thought was to apply the same rotation calculation (the one that depends on time elapsed) used for the gun to the bullets, but this causes the bullets to continue rotating after they've been fired, which isn't ideal.
I was wondering if there's a way to have one uniform variable "gunAngle" that is edited in the gun shader (i.e. rotated in the gun pass) and then have that modified value given to the bullet particle shader. If that's not possible in RenderMonkey, does anyone have any ideas for a workaround?

The "gunAngle" must be calculated on the CPU and set as a uniform each frame. If you want to use the same data in multiple shader programs, you can share it through a Shader Storage Buffer Object or a Uniform Block.
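Outside RenderMonkey's own workspace variables, a minimal sketch of the uniform-block approach (block and member names are my own, and it needs GLSL 1.40 / OpenGL 3.1 or later): the block is declared identically in the gun shader and the bullet shader, and the application fills its buffer once per frame so both programs read the same angle.

// Declared verbatim in both the gun and the bullet shader.
layout(std140) uniform GunState {
    float gunAngle;   // written by the application (CPU) each frame
};

// Helper either shader can use to build the shared rotation.
mat2 gunRotation() {
    float c = cos(gunAngle);
    float s = sin(gunAngle);
    return mat2(c, s, -s, c);   // columns (c, s) and (-s, c): rotation by gunAngle
}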

Related

How to simulate mathematically correct shadows of transparent objects?

I want to simulate shadows cast by complex, composite transparent objects.
These shadows must be mathematically correct for a particular light source (at least for a point light). I think this is true for any graphics library, isn't it?
Then, there must NOT be any refraction at all.
This image is not what I actually want to get, of course.
Can OpenGL do this? If it cannot, what should I use instead?
UPD. So I need some path tracer. Is there one which I could use programmatically: give it a file of a 3D scene with objects and get the result of the tracing?
These shadows must be mathematically correct
There's no such thing as a mathematically correct or wrong illumination. What you mean is physically correct.
Images like the ones you want to create rely on simulating light propagation. The only way to properly simulate light propagation is to shoot virtual photons into a scene and follow their paths. This is called path tracing.
Can OpenGL do this?
OpenGL just draws points, lines and triangles… one at a time, without any concept of a scene or models.
Old, fixed-function-pipeline OpenGL had a simple Blinn illumination model built in, but it just calculated a "light" value per vertex based on surface orientation (normal) and position relative to a light source.
Modern OpenGL does not even do that. Instead it relies on the programmer to provide programs that are executed for every vertex to decide where in the picture it goes, and for every fragment (roughly a pixel) drawn to determine which color to give it.
In these programs, called shaders, you can do just about anything. So if you want to implement a path tracer using OpenGL shaders, you can most certainly do that. But this path tracer will not interact with the points, lines and triangles you draw. Instead, these just serve to define the boundaries within which the shaders do their computations.
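To make that last point concrete, here is a hedged sketch of such a setup: the only geometry drawn is a fullscreen quad, and the fragment shader casts one ray per pixel into an analytic scene (a single sphere here). A path tracer would extend this with random bounces and accumulation over many samples; the camera, scene and uniform names are my own assumptions.

#version 330 core
// Runs over a fullscreen quad; the quad only defines where the shader executes,
// the "scene" lives entirely inside the shader.
out vec4 fragColor;
uniform vec2 uResolution;                      // viewport size in pixels

// Ray/sphere intersection: returns the hit distance, or -1.0 on a miss.
float hitSphere(vec3 ro, vec3 rd, vec3 center, float radius)
{
    vec3  oc = ro - center;
    float b  = dot(oc, rd);
    float h  = b * b - (dot(oc, oc) - radius * radius);
    if (h < 0.0) return -1.0;
    float t = -b - sqrt(h);
    return (t > 0.0) ? t : -1.0;
}

void main()
{
    vec2 uv = (gl_FragCoord.xy / uResolution) * 2.0 - 1.0;
    vec3 ro = vec3(0.0, 0.0, 3.0);             // camera origin
    vec3 rd = normalize(vec3(uv, -1.5));       // ray through this pixel

    vec3  color = vec3(0.05);                  // background
    float t = hitSphere(ro, rd, vec3(0.0), 1.0);
    if (t > 0.0) {
        vec3 n = normalize(ro + t * rd);       // sphere is at the origin
        vec3 l = normalize(vec3(1.0, 1.0, 0.5));
        color  = vec3(0.8) * max(dot(n, l), 0.0);   // simple diffuse shading
        // a path tracer would instead shoot new, randomly scattered rays from here
    }
    fragColor = vec4(color, 1.0);
}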
If it cannot, what should I use instead?
It's not so much a question of whether it is possible, but how easy it is to implement. In your case OpenGL is certainly not the right programming environment, because you'd essentially be starting from scratch. Instead you should use one of the existing path tracers around. There are also some that are GPU accelerated.

How does lighting work in building games with an unlimited number of lights?

In Minecraft, for example, you can place torches anywhere and each one affects the light level in the world, and there is no limit to the number of torches / light sources you can put down. I am 99% sure that the lighting for the torches is handled on the CPU and stored for each block, so when rendering, the light value at that block just needs to be passed into the shader; but light sources cannot move for this reason. If you had a game where you could place light sources that could move around (an arrow on fire, a minecart with a light on it, a glowing ball of energy) and the lighting wasn't as simple (color included), what are the most efficient ways to calculate the lighting effects?
From my research I have found deferred rendering, deferred lighting, dynamically creating shaders with different numbers of lights available, using a for loop (can't use uniforms due to unrolling), and static light maps (these would probably only be used for the still lights). Are there any other ways to do lighting calculations, such as doing what Minecraft does except allowing moving lights? Or is it possible to take an infinite number of lights and mathematically combine them into an approximation that only involves a few lights (this is an idea I came up with, but I can't figure out how it could be done)?
If it helps, I am a programmer with decent experience in OpenGL (legacy and modern), so you can give me code snippets, although I have not done much with lighting, so brief explanations would be appreciated. I am also willing to do research if you can point me in the right direction!
Your title is a bit misleading: "infinite lights" implies a directional light at infinite distance, like the Sun. I would use "unlimited number of lights" instead. Here are some approaches for this that I know of:
(back) ray-tracers
they can handle any number of light sources natively. A light is just another object in the engine. If a ray hits a light source, it just takes the light's intensity and stops the recursion. Unfortunately, current gfx hardware is not suited for this kind of rendering. There are GPU-enhanced engines for this, but the specialized gfx HW is still in development and has not hit the market yet. Memory requirements are not much different than for standard BR rendering, and you can still use BR meshes, but mathematical (analytical) meshes are natively supported and are better for this.
Standard BR rendering
BR means boundary representation. Such engines (like fixed-function OpenGL) can handle only a limited number of lights. This is because each primitive/fragment needs the complete list of lights, and the computations are done for all lights on a per-primitive or per-fragment basis. If you have many lights, this gets slow.
GLSL example of fixed number of light sources (see the fragment shader there)
Also, current GPUs have limited memory for uniforms (registers), in which the lights and other rendering parameters are stored, so there are possible workarounds, like storing the light parameters in a texture and iterating over all of them per primitive/fragment inside the GLSL shader (a sketch is below). The number of lights of course affects performance, so you are limited by your target frame rate and computational power. The additional memory requirement for this is just the texture with the light parameters, which is not that much (a few vectors per light).
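A hedged sketch of that texture workaround (the packing scheme, two texels per light, and all names are my own assumptions): the application uploads the light list into a floating-point texture and the fragment shader loops over it.

#version 330 core
// Lights packed by the application into a 1D RGBA32F texture:
//   texel 2*i   = (position.xyz, unused)
//   texel 2*i+1 = (color.rgb, intensity)
uniform sampler1D uLights;
uniform int       uLightCount;

in  vec3 vPosition;               // world-space position from the vertex shader
in  vec3 vNormal;                 // world-space normal from the vertex shader
out vec4 fragColor;

void main()
{
    vec3 N      = normalize(vNormal);
    vec3 albedo = vec3(0.8);
    vec3 result = vec3(0.0);

    for (int i = 0; i < uLightCount; ++i) {
        vec4 posTexel = texelFetch(uLights, 2 * i,     0);
        vec4 colTexel = texelFetch(uLights, 2 * i + 1, 0);
        vec3  L = posTexel.xyz - vPosition;
        float d = length(L);
        L /= d;
        float atten = colTexel.a / (1.0 + d * d);          // simple falloff
        result += albedo * colTexel.rgb * atten * max(dot(N, L), 0.0);
    }
    fragColor = vec4(result, 1.0);
}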
light maps
they can be computed even for moving objects. Complex light maps can be computed slowly (not per frame); this leads to small lighting artifacts, but you need to know what to look for to spot them. Light maps and shadow maps are very similar and are often computed at once. There are simple light maps and complex radiation-map models out there;
see Shading mask algorithm for radiation calculations.
These are either:
projected 2D maps (hard to implement/use and often less precise)
3D Voxel maps (Memory demanding but easier to compute/use)
Some approaches use a pre-rendered Z-buffer as the geometry source and then fill in the lights via radiosity or any other technique. These can handle any number of lights. As these maps can be computationally demanding, they are often computed in the background and updated once in a while.
Fast-moving light sources are usually updated more often, or excluded from the maps and rendered as transparent geometry to give the impression of light. The computational power needed for this depends on the computation method; the basic ones are done like this:
set a camera at the largest visible surfaces
render scene and handle the result as light/shadow map
store it as 2D or 3D texture or voxel map
and then continue with normal rendering from camera view
So you need to render the scene more than once per frame/map update, and you also need additional buffers to store the rendered result, which for high resolutions or voxel maps can be a big chunk of memory.
multi pass light layer
there are cases where light is added after rendering the scene; for example, I used it for
Atmospheric scattering in GLSL
Here come all the multi-pass rendering techniques. You need additional buffers to store the sub-results, and usually the multi-pass rendering is done on the same view/scene, so pre-rendered geometry is reused, which significantly speeds this up, either as a locked VAO or as the already-rendered Z-buffer, color and index buffers from the first pass. After this, the next passes are handled as a single quad or a few quads (as in the Atmospheric scattering link), so the computational power needed for this is not much bigger compared to basic BR rendering.
forward rendering vs. deferred rendering
A quick Google search gives forward rendering vs. deferred rendering as the first relevant hit I found. It is not a very good one (a bit too vague for my taste), but for starters it is enough.
forward rendering techniques are usually standard single-pass BR renders
deferred rendering is a standard multi-pass render. In the first pass, all the geometry of the scene is rendered into the Z-buffer, color buffer and some auxiliary buffers, just to know which fragment of the result belongs to which object, material, etc. Then in the next passes, effects, lights, shadows, etc. are added, but the geometry is not rendered again; instead, just a single overlay quad (or a few per pass) is rendered, so the later passes are usually pretty fast.
The link suggests that deferred rendering is better suited for a high number of lights, but that strongly depends on which of the previous techniques is used. Usually the multi-pass light layer is used (which is one of the standard deferred rendering techniques), so in that case it is true, and the memory and computational power demands are the same; see the previous section. A minimal sketch of such a deferred lighting pass follows below.
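As an illustration, here is a hedged sketch of the lighting pass of such a deferred renderer: the geometry pass has already written position, normal and albedo into G-buffer textures, and this fragment shader, drawn as a single fullscreen quad with additive blending, adds one light's contribution per pass. The buffer layout and names are my own assumptions.

#version 330 core
// Deferred lighting pass: reads the G-buffer instead of re-rendering the geometry.
uniform sampler2D uGPosition;     // world-space position per pixel
uniform sampler2D uGNormal;       // world-space normal per pixel
uniform sampler2D uGAlbedo;       // material color per pixel
uniform vec3 uLightPos;
uniform vec3 uLightColor;

in  vec2 vUv;                     // texture coordinate of the fullscreen quad
out vec4 fragColor;

void main()
{
    vec3 P      = texture(uGPosition, vUv).xyz;
    vec3 N      = normalize(texture(uGNormal, vUv).xyz);
    vec3 albedo = texture(uGAlbedo, vUv).rgb;

    vec3  L = uLightPos - P;
    float d = length(L);
    L /= d;

    // With additive blending enabled, each light's pass just adds its contribution.
    vec3 lit = albedo * uLightColor * max(dot(N, L), 0.0) / (1.0 + d * d);
    fragColor = vec4(lit, 1.0);
}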

How hard would it be to write this shader? Multipoint shadow lighting

So, I have a simple project right now. Basically it's just a bunch of cuboids that are all axis-aligned... so it has really simple geometry.
Anyway, I am considering adding a better shader to it. Currently I am using the "flat shader" that is a stock shader in GLShaderManager. It is coloring everything with a flat color. However, I would love it if I could build a shader like the following.
Basically I want a shader that has an array of point lights at various positions with varying intensities.
Probably defined like this.
struct Light {
    float x;
    float y;
    float z;
    float intensity;
};
Light Lighting[20];
And basically, based on the level geometry and lights, I would love to simulate basic lighting and shadows; also it would be cool to have a circle under the player (like the player is actually there).
How hard would this be to make? How would I pass it my level geometry and light array? (Note: even though each cuboid is its own QUADS batch, it will be easy to make any kind of variable that stores the data.)
I am using Glew, GLTools, GLShaderManager, GLBatch, visual studio 2010, probably whatever "GSHL".
If you could just let me know how complicated a shader like this would be, that would help. Also, if it is easy to find a shader that works like this online, could you link it?
Also, what is the difference between the two types of shaders (vertex and fragment)?
I would say it's relatively simple, but the thing about modern GL is that the initial learning curve is quite steep. At first it seems like you have to roll up your sleeves and learn how to do everything (essentially true) but later, it starts to seem like it made things easier than ever before with much more predictable behaviors since you're in the driver's seat.
One of the first things you want to learn is how to pass data from the CPU to the GPU. For values which don't vary on a per-vertex or per-fragment basis, such as your light positions and intensities, you want uniforms. Check out examples utilizing the glUniform* functions to see how to do this. This will allow you to experiment, passing values from the CPU side to the GPU side and then seeing how they affect the shader, to accelerate your learning.
After that, it's worth learning how direct lighting is computed given a ray bouncing off a surface with Phong shading, separating ambient, diffuse, and specular terms.
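A hedged fragment-shader sketch of that computation, using a uniform array shaped like the Light struct from the question (filled from the CPU with glUniform* calls; the member packing, material constants and names are my own assumptions):

#version 330 core
struct Light {
    vec3  position;               // the question's x, y, z packed into one vec3
    float intensity;
};
uniform Light uLights[20];
uniform int   uLightCount;        // how many entries of uLights are actually in use
uniform vec3  uViewPos;           // camera position, needed for the specular term

in  vec3 vPosition;               // interpolated outputs of the vertex shader
in  vec3 vNormal;
out vec4 fragColor;

void main()
{
    vec3 N      = normalize(vNormal);
    vec3 V      = normalize(uViewPos - vPosition);
    vec3 albedo = vec3(0.7);
    vec3 color  = 0.1 * albedo;                        // ambient term

    for (int i = 0; i < uLightCount; ++i) {
        vec3  L = uLights[i].position - vPosition;
        float d = length(L);
        L /= d;
        float atten = uLights[i].intensity / (1.0 + d * d);

        float diff = max(dot(N, L), 0.0);              // diffuse term
        vec3  R    = reflect(-L, N);
        float spec = pow(max(dot(R, V), 0.0), 32.0);   // specular term (Phong)

        color += atten * (albedo * diff + vec3(0.3) * spec);
    }
    fragColor = vec4(color, 1.0);
}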
Later you might even want to store this light data into an environment map. That'll give you the ability to use as many lights as you want without affecting the speed of the shader.
About vertex vs. fragment shaders, vertex shaders compute things on a vertex-by-vertex basis, including data for the fragment shader to then use. The fragment shader is kind of like a pixel shader (in HLSL, it's actually just called a 'pixel shader'). It deals with rasterizing what's in between those vertices and is operating on a pixel-by-pixel basis (however with some potential overdraw). Often for lighting, the real heart of the logic will be in the fragment shader, while the vertex shader serves as an intermediary step to compute all the relevant values for the fragment shader to interpolate and use. The vertex shader is part of the 3D geometry pipeline, while the fragment shader is part of 2D rasterization.
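A matching vertex-shader sketch of that division of labor (again with my own names): it only transforms the vertex and hands world-space position and normal down for the fragment shader above to interpolate and use.

#version 330 core
layout(location = 0) in vec3 aPosition;   // per-vertex data from the vertex buffer
layout(location = 1) in vec3 aNormal;

uniform mat4 uModel;
uniform mat4 uViewProjection;

out vec3 vPosition;   // interpolated across the triangle for the fragment shader
out vec3 vNormal;

void main()
{
    vec4 world  = uModel * vec4(aPosition, 1.0);
    vPosition   = world.xyz;
    vNormal     = mat3(uModel) * aNormal;   // fine as long as uModel has no non-uniform scale
    gl_Position = uViewProjection * world;  // where this vertex lands on screen
}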
It shouldn't take too long or be too hard to get the hang of this, but you want to approach it slowly and in baby steps. There's a lot of setup work involved in establishing a lighting/shading pipeline for your software with the precise characteristics you want, and for the final work you want to plan ahead. So it's good to set up a separate scrap project and start experimenting away to figure out how things work.

GLSL: How to retrieve a vertex position after a shader has processed it?

I wrote a program that simulates soft bodies using springs. It looks nice, but the problem is that it consumes a lot of CPU time, so I cannot run it on my laptop or any PC that isn't high end.
I thought it would be a good idea to write a vertex shader and move the logic to the GPU. I've read some tutorials and made a toon shader, so I thought (wrongly) that I was ready to go.
The big problem I have is that I need to know the old position of a vertex to calculate the new one. I don't know how I could retrieve a vertex position so I can send it back to the shader each frame.
I'm not really sure this is even possible to do, and maybe I'm trying to make shaders do something they were never meant to do. I am still researching, but I thought I could ask and see if maybe someone can help.
You can use the transform feedback mechanism if your hardware supports OpenGL 3.0 or above. There are also other techniques for getting the vertex position back, like carefully arranging your rendering so that you're writing a triangle (or point primitive) to each separate pixel on the screen. This is fairly difficult, and you need to render to a floating-point buffer, which requires FBO support.
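For illustration, a hedged sketch of the transform-feedback route: the vertex shader advances the simulation one step and writes the results to output variables, which the application registers with glTransformFeedbackVaryings before linking and captures into a buffer that becomes the next frame's input. The force model here is deliberately reduced to a single spring per vertex (a real mass-spring soft body would also need its neighbours' positions), and all names are my own assumptions.

#version 330 core
// One simulation step per vertex; tfPosition/tfVelocity are captured by
// transform feedback into a buffer that is bound as the input for the next frame.
layout(location = 0) in vec3 aPosition;   // position from the previous step
layout(location = 1) in vec3 aVelocity;   // velocity from the previous step
layout(location = 2) in vec3 aRestPos;    // point this vertex's spring pulls toward

uniform float uDt;                        // time step
uniform float uStiffness;                 // spring constant
uniform float uDamping;

out vec3 tfPosition;                      // captured by transform feedback
out vec3 tfVelocity;

void main()
{
    vec3 force  = uStiffness * (aRestPos - aPosition) - uDamping * aVelocity;
    tfVelocity  = aVelocity + force * uDt;       // semi-implicit Euler integration
    tfPosition  = aPosition + tfVelocity * uDt;
    gl_Position = vec4(tfPosition, 1.0);         // only relevant if rasterization isn't discarded
}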

Updating information from the vertex shader

In the vertex shader program of a WebGL application, I am doing the following:
Calculate gl_Position P using a function f(t) that varies in time.
My question is:
Is it possible to store the updated P(t) computed in the vertex shader so I can use it in the next time step? This will be useful for performing some boundary tests.
I have read some information on how textures can be used to store and update vertex positions, but is this feasible in WebGL, since even texture access from a vertex program is unsupported in OpenGL ES 1.0?
For a more concrete example, let us say that we are trying to move a point according to the equation R(t) = (k*t, 0, 0). These positions are updated in the vertex shader, which makes the point move. Now I want to make the point bounce off a wall located at R = (C, 0, 0). To do that, we need the position of the point at t - dt (the previous time step).
Any ideas appreciated.
Regards
In addition to the previous answers, you can circumvent vertex texture fetch with PBOs, but I do not know if they are supported in WebGL or GLES, as I only have desktop GL experience. You write the vertex positions into the framebuffer. Then, instead of using these as a vertex texture, you copy them into a vertex buffer (which works best via PBOs) and use them as a usual vertex attribute. That's the old way of doing transform feedback, which I suppose is not supported.
There's no way to store anything in the vertex shader. You can only pass values from it to the fragment shader and write those to the framebuffer pixels. And as you said, vertex texture fetch isn't universally supported (for instance, ANGLE started supporting it only a few days ago), so even that is a bit unworkable.
You can do two things: either do all the position math in JS and pass in p1 and p0 as uniforms, or keep track of the previous time value and do the position math twice in the shader, for both t1 and t0 (this shouldn't have much of a performance impact unless you're vertex-shader bound). A sketch of the second option is below.
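A hedged WebGL (GLSL ES 1.00) sketch of that second option, with my own names and the motion law from the question: the application passes both the current and the previous time as uniforms, the shader evaluates R(t) for both, and uses the pair for the boundary test. Note that without persistent state the reflection only applies during the step in which the wall is crossed, which is exactly why the previous time step is needed.

// Vertex shader (WebGL / GLSL ES 1.00)
attribute vec3 aPosition;     // rest position of the vertex
uniform mat4  uMvp;
uniform float uTime;          // t1, current time
uniform float uTimePrev;      // t0, previous time step
uniform float uK;             // speed along +x
uniform float uC;             // wall position

vec3 R(float t) {             // motion law from the question: R(t) = (k*t, 0, 0)
    return vec3(uK * t, 0.0, 0.0);
}

void main() {
    vec3 pNow  = R(uTime);
    vec3 pPrev = R(uTimePrev);

    // Boundary test: if the point crossed the wall during this step,
    // reflect the overshoot back across the wall.
    if (pPrev.x <= uC && pNow.x > uC) {
        pNow.x = uC - (pNow.x - uC);
    }

    gl_Position = uMvp * vec4(aPosition + pNow, 1.0);
}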
Is your dt a constant? If so, you could retrieve the previous position of your point by evaluating R(t - dt). If it is not constant, you could use a uniform to pass it along on every rendering cycle.