I made two examples of rendering (here I only render triangles with a single texture).
The first uses forward rendering: I simply draw the triangles directly to the swapchain with a simple fragment shader:
outColor = vec4(texture(texAlbedo, fragTexCoord).rgba);
The second uses deferred shading: a first pass draws the scene with the texture, and a second pass copies the resulting pixels to another texture. The second pass uses only a compute shader:
imageStore(resultImage, ivec2(gl_GlobalInvocationID.xy), vec4(albedo.bgr, 1.0));
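For context, a stripped-down version of such a copy pass looks roughly like this (the bindings, image format and workgroup size below are illustrative, not my exact code):

#version 450
// Illustrative sketch only: bindings, image format and workgroup size are assumptions.
layout (local_size_x = 16, local_size_y = 16) in;
layout (binding = 0) uniform sampler2D samplerAlbedo;              // color output of the first pass
layout (binding = 1, rgba8) uniform writeonly image2D resultImage; // texture written by this pass

void main()
{
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    vec4 albedo = texelFetch(samplerAlbedo, coord, 0); // read mip level 0 explicitly
    imageStore(resultImage, coord, vec4(albedo.bgr, 1.0));
}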
I think both results should be the same, but the second one has rendering problems.
The result of the first example:
The result of the second example:
I don't understand this problem. Thank you for your help! :)
Thank you for your responses.
The problem was that in the second example, I was creating image views without mipmap levels.
EDIT:
My question was unclear at first, so I'll try to rephrase it:
How do I use different shaders to do different rendering operations on the same mesh polygons? For example, I want to add lighting using one shader and add fog using another shader. I need to use the color interpolated from the first shader in the calculation of the second shader, but I don't know how to do that if I can't (or rather am not supposed to) pass the color buffer around between shaders.
Also (and that is where my question started), I need the same world-view-projection calculations for both shaders, so am I supposed to calculate them separately in every shader? Am I supposed to use one big shader for all my rendering operations?
Original question:
Say I have two different shader programs. The first one calculates the vertex positions in the vertex shader and does some operations in the fragment shader.
Let's say I want to use the fragment shader to do different calculations, but I still want to use the same vertex positions calculated by the first vertex shader. Do I have to calculate the vertex positions again or is there a way to share state between different shader programs?
You have several options:
multi pass
This one usually renders the geometry into depth and "color" buffers first, and then later passes use those as input textures while rendering a single rectangle covering the whole screen/view. Deferred shading is an example of this, but many other effects that are not related to deferred shading are implemented the same way. Here is an example of multi pass:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
In the first pass the planets, stars and other objects are rendered; in the second, the atmosphere is added.
You can combine the passes either by blending or by direct rendering. Direct rendering requires that you render each pass into a texture and compose them in the last one. Blending instead changes the color of the output in each pass.
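A second pass of this kind usually boils down to a fragment shader run over a fullscreen quad that samples the outputs of the first pass. A minimal sketch, assuming the first pass wrote into colorTex and depthTex (names are placeholders):

#version 330 core
// Second-pass fragment shader for a fullscreen quad; uniform and varying names are assumptions.
uniform sampler2D colorTex; // color output of the first pass
uniform sampler2D depthTex; // depth output of the first pass
in vec2 uv;                 // 0..1 screen coordinates from the fullscreen-quad vertex shader
out vec4 fragColor;

void main()
{
    vec3  color = texture(colorTex, uv).rgb;
    float depth = texture(depthTex, uv).r;
    // Any screen-space effect goes here; as a placeholder, tint distant pixels like an atmosphere.
    fragColor = vec4(mix(color, vec3(0.5, 0.7, 1.0), depth * 0.25), 1.0);
}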
single pass
What you describe sounds more like you should encode the different shaders as functions of a single fragment shader... Yes, you can combine several shaders into a single one if they are compatible, and combine their results into the final output color.
A big shader is a performance hit, but I think it would still be faster than having multiple passes doing the same work.
Take a look at this example:
Normal mapping gone horribly wrong
This one computes environmental reflection, lighting and geometry color, and combines them into a single output color.
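To make this concrete for the lighting + fog case from the question, the two "shaders" become two functions chained inside one fragment shader. A rough sketch, with all names being placeholders:

#version 330 core
// Sketch of combining two effects (lighting, then fog) as functions in one fragment shader.
uniform sampler2D albedoTex;
uniform vec3 lightDir;   // direction towards the light, in view space
uniform vec3 fogColor;
in vec3 vNormal;         // view-space normal from the vertex shader
in vec2 vUv;
in float vViewDepth;     // distance from the camera, also passed from the vertex shader
out vec4 fragColor;

vec3 applyLighting(vec3 baseColor)
{
    float ndotl = max(dot(normalize(vNormal), normalize(lightDir)), 0.0);
    return baseColor * (0.2 + 0.8 * ndotl); // simple ambient + diffuse
}

vec3 applyFog(vec3 litColor)
{
    float f = clamp(vViewDepth / 100.0, 0.0, 1.0); // linear fog; 100 units is arbitrary
    return mix(litColor, fogColor, f);
}

void main()
{
    vec3 base = texture(albedoTex, vUv).rgb;
    fragColor = vec4(applyFog(applyLighting(base)), 1.0); // the fog stage consumes the lit color
}

Note that the world-view-projection math still lives in the single vertex shader of this one program, so it is computed only once per vertex.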
Exotic shaders
There are also exotic shaders that work around the pipeline limitations, like this one:
Reflection and refraction impossible without recursive ray tracing?
These are used for stuff that is believed to be impossible to implement in the GL/GLSL pipeline. Anyway, if the limitations are too constraining, you can still use a compute shader...
I have a program that displays a color surface. Then through some method (which is the focus of my thesis but unimportant here) I closely recreate the color surface. So this gives me two copies of the color surface and I want to find the 'difference' between the two outputs, to see how closely they resemble each other. So loosely speaking I want to render something like
abs(render_1 - render_2)
Because of the complicated structure of both color surfaces I cannot directly calculate the difference before rendering. Is there some way that I can use GLSL shaders to do this? I was hoping that it is possible to first render one surface, then in the second render pass use a shader that queries the color already present at the render location, but I do not think this is possible. Any thoughts on how to do this?
It is possible. You can render the first surface to a framebuffer and then query the value of each pixel from that texture in a second render pass. Since a color is a 4D vector, you can calculate the distance between the pixel fetched from the texture and the pixel calculated in the shader. Once you have the difference, you can calculate and visualize the SNR.
Render each version into its own texture using an FBO, then in a third pass you can evaluate the difference between the values in the rendered pictures (using a shader).
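A sketch of that third pass's fragment shader, assuming the two renders ended up in textures called render1 and render2 (the names are placeholders):

#version 330 core
// Third pass: visualize the per-pixel difference of the two renders.
uniform sampler2D render1; // first surface, rendered to a texture via an FBO
uniform sampler2D render2; // second (recreated) surface
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 a = texture(render1, uv).rgb;
    vec3 b = texture(render2, uv).rgb;
    fragColor = vec4(abs(a - b), 1.0); // black where the two renders agree
}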
I am exploring some GLSL and have something I want to try to implement. Here is the situation:
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to take the world-position texture and sample it to test whether the current fragment's world-space coordinate is a match?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Any help is greatly appreciated, thanks :)
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to take the world-position texture and sample it to test whether the current fragment's world-space coordinate is a match?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Yes, it is possible. This is essentially a shadow-map, but now you'll have to calculate the distances manually during the sampling. It's unclear why you insist on storing the world-space XYZ coordinates and what's the use-case of this. It should be much simpler and more efficient to store the depths in a depth texture and use the built-in depth-texture lookup.
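If you do keep the world-space storage, a sketch of the comparison in camera B's fragment shader could look like this (the matrix, texture and epsilon names are assumptions):

#version 330 core
// Sketch only: checks whether the current fragment was also seen by camera A.
uniform sampler2D worldPosTexA; // rgb = world-space xyz as rendered from camera A
uniform mat4 viewProjA;         // camera A's view-projection matrix
uniform float matchEpsilon;     // tolerance in world units
in vec3 vWorldPos;              // world position of the current fragment (camera B's pass)
out vec4 fragColor;

void main()
{
    // Project the fragment into camera A's screen space to know where to sample.
    vec4 clipA = viewProjA * vec4(vWorldPos, 1.0);
    vec2 uvA = clipA.xy / clipA.w * 0.5 + 0.5;

    vec3 seenByA = texture(worldPosTexA, uvA).rgb;
    bool visible = all(greaterThanEqual(uvA, vec2(0.0))) &&
                   all(lessThanEqual(uvA, vec2(1.0))) &&
                   distance(seenByA, vWorldPos) < matchEpsilon;

    fragColor = visible ? vec4(0.0, 1.0, 0.0, 1.0) : vec4(1.0, 0.0, 0.0, 1.0);
}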
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Yes. You can render a texture and then use imageLoad and imageStore (and related APIs) in another shader to modify it. You must be careful, however, with feedback loops. Because of the parallel nature of the GPUs, and their cache-incoherent architecture, it might be complicated and a detailed answer would depend on the exact thing you're trying to achieve.
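As a minimal sketch of such a modify-in-place pass (the binding, image format and workgroup size are assumptions):

#version 430
// Compute pass that modifies an already-rendered texture in place.
layout (local_size_x = 16, local_size_y = 16) in;
layout (binding = 0, rgba8) uniform image2D sceneImage;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, imageSize(sceneImage))))
        return; // skip threads that fall outside the image

    vec4 c = imageLoad(sceneImage, p);
    imageStore(sceneImage, p, vec4(c.rgb * 0.5, c.a)); // e.g. darken the rendered result
}

Reading and writing the same texel from the same invocation, as above, is safe; the feedback-loop problems appear when invocations touch each other's texels, and you will typically also need a memory barrier (e.g. glMemoryBarrier) before a later pass samples the result.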
I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post processing. The way it works is that you first render the entire scene to a texture using the shaders you already have. You can then render this texture to the screen using a fragment shader to apply various effects like motion blur, warping or depth of field.
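A sketch of that second step: a fragment shader drawn on a fullscreen quad that samples the scene texture with modified coordinates (the names and the warp itself are just illustrative):

#version 330 core
// Post-processing pass: the whole scene was rendered into sceneTex beforehand.
uniform sampler2D sceneTex;
uniform float time;
in vec2 uv;
out vec4 fragColor;

void main()
{
    // Simple sine-based warp of the sampling coordinates.
    vec2 warped = uv + 0.01 * vec2(sin(uv.y * 40.0 + time), cos(uv.x * 40.0 + time));
    fragColor = texture(sceneTex, warped);
}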
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to just bind a single shader program at the beginning of a batch and every primitive of that batch is subjected to this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first, then re-render that texture onto the screen while applying some changes to it (warping it, for example). You can also extract the depth buffer and use it if there is a more complex change that you want to apply.
If you bind the shader you want before the render loop, it will affect all items until you unbind it (i.e. bind id #0) or disable GL_TEXTURE_2D via glEnable()/glDisable().
In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so the original object is not drawn over. However, this results in outlines with one solid color.
In this image, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use wireframe for this. I guess it is heavily shader related and requires this:
Rendering the object to a stencil buffer
Rendering the stencil buffer with a color of choice while applying a blur (see the sketch after this list)
Rendering the model on top of it
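A rough sketch of the blur step, assuming the stenciled silhouette has been written into a texture first (the names and kernel size are assumptions):

#version 330 core
// Blurs a one-color silhouette texture; the result fades out with distance from the shape.
uniform sampler2D silhouetteTex; // white where the object was stenciled, black elsewhere
uniform vec3 outlineColor;
uniform vec2 texelSize;          // 1.0 / texture resolution
in vec2 uv;
out vec4 fragColor;

void main()
{
    float coverage = 0.0;
    for (int x = -4; x <= 4; ++x)
        for (int y = -4; y <= 4; ++y)
            coverage += texture(silhouetteTex, uv + vec2(x, y) * texelSize).r;
    coverage /= 81.0; // 9x9 box average

    fragColor = vec4(outlineColor, coverage); // alpha fades with distance from the silhouette
}

Blending this over the scene before drawing the model on top gives the soft falloff seen in the image.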
I'm late for an answer but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a not so complex shader.
In the fragment shader, you will calculate the color of the fragment based on lighting and texture, giving you the un-highlighted color 'colorA'.
Your second color is the outline color, 'colorB'.
You should obtain the fragment to camera vector, normalize it, then get the dot product of this vector with the fragment's normal.
The fragment to camera vector is simply the inverse of the fragment's position in eye-space.
The colour of the fragment is then:
float cameraFacingPercentage = dot(v_fragmentToCamera, v_Normal);
gl_FragColor = colorA * cameraFacingPercentage + colorB * (1.0 - cameraFacingPercentage);
This is the basic idea, but you'll have to play around to get more or less of the outline color. Also, the concave parts of your model will be highlighted as well, but that is also the case in the image posted in the question.
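Put together, a minimal version of that fragment shader could look like the sketch below; the pow() exponent is one way to "play around" with how wide the outline appears (all names are placeholders):

// GLSL 1.20-style sketch, to stay consistent with the gl_FragColor snippet above.
uniform sampler2D albedoTex;
uniform vec4 colorB;         // outline color
uniform float outlinePower;  // higher values pull the outline tighter to the silhouette
varying vec3 v_Normal;       // eye-space normal
varying vec3 v_EyePos;       // eye-space position of the fragment
varying vec2 v_TexCoord;

void main()
{
    vec4 colorA = texture2D(albedoTex, v_TexCoord);       // would include lighting in a full shader
    vec3 fragmentToCamera = normalize(-v_EyePos);          // inverse of the eye-space position
    float facing = clamp(dot(fragmentToCamera, normalize(v_Normal)), 0.0, 1.0);
    float edge = pow(1.0 - facing, outlinePower);          // 0 facing the camera, 1 at the silhouette
    gl_FragColor = mix(colorA, colorB, edge);
}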
Detect edges in GLSL shader using dotprod(view,normal)
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I can see, the effect in the screenshot and many "edge" effects are not pure edges, as in a comic outline. What is mostly done is this: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader the normal is taken, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by including areas close to perfectly perpendicular.
I would have to look up the exact math, but I think if you take the dot product of the camera vector and the normal you get the amount of "perpendicularness". You can then run that through a function like exp to get a bias towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything is in screenspace.)